Rensselaer Cybersecurity Collaboratory's Post
Today's highlight is Shoshana Sugerman! If the RCC had a lead researcher position, Shoshana would definitely hold it. She is always working on two or three high-impact #research projects at once. Whether it's #quantum machine learning, the ethics of Generative #AI in human-facing #cybersecurity tasks, or, most recently, evaluating compiler-based mitigation techniques, she is always excited to be expanding the state of the art in cybersecurity knowledge. #security #cybersec #infosec #informationsecurity #generativeai #ml #machinelearning #RPI Rensselaer Polytechnic Institute Rensselaer Polytechnic Institute Information Technology & Web Science
More Relevant Posts
-
Student @ Sri Shakthi Institute of Engineering and Technology | Python | Machine Learning Enthusiast | Venturing into Deep Learning
Excited to share my latest publication in the International Journal of Scientific Research in Engineering and Management (IJSREM)! 🎉 📄 Title: Phishing Website Detection Using Machine Learning In this study, we developed a method to detect phishing websites using machine learning. By analyzing URL features like length, special characters, subdomains, and keywords, we built a model that accurately identifies phishing URLs. This research highlights the methodology and the model's performance in real-world scenarios. Grateful for the support and guidance from my mentor, Ms. Gayathri N, and collaborators, Siva Sakthii U S and Janani Dhandayutham. 📖 Read the full paper here: https://lnkd.in/g-XMciH2 ISSN: 2582-3930 Impact Factor: 8.448 Volume 8 - Issue 07 July, 2024 #Research #Engineering #AI #MachineLearning #CyberSecurity #IJSREM #Publication #ScientificResearch
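The post describes the approach only at a high level; a minimal sketch of URL-feature-based phishing detection in that spirit might look like the following. The specific features, keyword list, and logistic-regression classifier are illustrative assumptions, not the published model.

```python
# Hedged sketch of URL-based phishing detection: hand-crafted lexical
# features (length, special characters, subdomains, keywords) fed to a
# simple classifier. The published model may use different features.
import re
from urllib.parse import urlparse

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SUSPICIOUS_KEYWORDS = ("login", "verify", "secure", "account", "update")  # illustrative list

def url_features(url: str) -> list:
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                                                    # overall URL length
        sum(not c.isalnum() for c in url),                           # count of special characters
        max(host.count(".") - 1, 0),                                 # rough number of subdomains
        float(any(k in url.lower() for k in SUSPICIOUS_KEYWORDS)),   # suspicious-keyword flag
        float("@" in url),                                           # '@' redirection trick
        float(bool(re.search(r"\d+\.\d+\.\d+\.\d+", host))),         # raw IP address as host
    ]

def train_phishing_classifier(urls, labels):
    X = [url_features(u) for u in urls]
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    return clf
```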
-
Student @ Sri Shakthi Institute of Engineering and Technology | BTech in Artificial Intelligence and Machine Learning | Python
Excited to share my latest publication in the International Journal of Scientific Research in Engineering and Management (IJSREM)! 🎉 📄 Title: Phishing Website Detection Using Machine Learning In this study, we developed a method to detect phishing websites using machine learning. By analyzing URL features like length, special characters, subdomains, and keywords, we built a model that accurately identifies phishing URLs. This research highlights the methodology and the model's performance in real-world scenarios. Grateful for the support and guidance from my mentor, Ms. Gayathri N, and collaborators, Pradeepa Murugesan and Janani Dhandayutham. 📖 Read the full paper here: https://lnkd.in/g-XMciH2 ISSN: 2582-3930 Impact Factor: 8.448 Volume 8 - Issue 07 July, 2024 #Research #Engineering #AI #MachineLearning #CyberSecurity #IJSREM #Publication #ScientificResearch
-
Senior Lecturer in Cybersecurity (Trustworthiness in AI) at RUL, United Kingdom | SMIEEE | IEEE IFS-TC | AFHEA | EPSRC-UKRI Grant Reviewer | AE of IEEE TNSM | Open to Research Collaboration | TurkPatent ID 2023/004922
We are thrilled to announce that our latest study, "Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks," is now accessible on arXiv [https://lnkd.in/eZ4DF2MW] and is currently under submission to a journal. A collaborative effort with Imran Haider, Dr. Rahim Taheri, and Prof. Mauro Conti. This study investigated the impact of #datapoisoning attacks, specifically label flipping (LF) and feature poisoning (FP), on Federated Learning (FL) in #computernetworks. FL allows #decentralized devices to jointly train a shared model without sharing raw data. #Adversarial samples were generated using the #CIC and #UNSW datasets, and model accuracy was evaluated. The results show that LF attacks are detectable, whereas FP attacks are challenging to detect, demonstrating the severity of FP attacks in compromising model integrity within the FL setting. [If you have expertise in artificial intelligence for security and programming, I am keen to embark on new research collaborations. Do not hesitate to contact me via email at e.nowroozi@rave.ac.uk if our research resonates with you.] #FederatedLearning #DataPoisoning #ComputerNetworks #ResearchPublication #Collaboration #ArtificialIntelligence #Security #DataSecurity #ModelIntegrity #FLAttacks #LabelFlipping #FeaturePoisoning #AcademicResearch #AIforSecurity #Cybersecurity #InfoSec #MachineLearning #SecurityTech #ThreatDetection #SecurityAlgorithms #SecurityAutomation #SecurityAnalytics #AIinCybersecurity #SecurityResearch #SecuritySolutions #CyberDefense #SecurityInnovation #DataProtection #DigitalSecurity #AIandSecurity #SecureAI #SecurityFrameworks Ravensbourne University London
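For readers unfamiliar with label flipping, the sketch below illustrates what an LF poisoning step on a single malicious client might look like. The flip rate, class pair, and data are placeholders, not the paper's experimental configuration on the CIC or UNSW datasets.

```python
# Illustrative label-flipping (LF) poisoning step on one federated client;
# flip rate and class pair are assumptions, not the paper's setup.
import numpy as np

def flip_labels(y: np.ndarray, src: int, dst: int, rate: float, seed: int = 0) -> np.ndarray:
    """Relabel a fraction `rate` of samples whose label is `src` as `dst`."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == src)
    chosen = rng.choice(idx, size=int(rate * idx.size), replace=False)
    y_poisoned[chosen] = dst
    return y_poisoned

# Example: a malicious client relabels 40% of "attack" traffic (class 1)
# as "benign" (class 0) before local training, then trains as usual.
y_local = np.array([0, 1, 1, 0, 1, 1, 1, 0])
y_bad = flip_labels(y_local, src=1, dst=0, rate=0.4)
```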
-
Aspiring Data Scientist | Research Assistant at University of Houston Main Campus | Ex - Accenture | Data Science | Machine Learning | Deep Learning | Python | AWS | SQL | Big Data | Gen AI | LLM
🚀 Advancing Machine Learning Security Against Cyber Attacks in My Research Work 🔐 As a Machine Learning Research Assistant at the University of Houston, I've been working on a project titled "Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks." In this work, I've developed and implemented several advanced black-box attack methods, including the Boundary, Square, NES, QEBA, HSJA, and SurFree attacks, on the CIFAR-10 and ImageNet datasets. Using the ResNet152 model, I evaluated the impact of these attacks by generating adversarial images. 💻📊 These types of cyber attacks are frequently employed by malicious actors, making it essential to study their effects on machine learning models. I collected the top-2 logits and the margin loss for each adversarial image to perform statistical analysis and visualization, uncovering patterns and performance insights across all six attacks. 📈 🔍 This analysis will lay the foundation for developing robust defense mechanisms against these attacks, and I'm excited about what's to come! 💪🤖 You can read more on this topic in the paper: https://lnkd.in/gfr_tNBd 🔗 #MachineLearning #Cybersecurity #BlackboxAttacks #AI #AdversarialAttacks #Research #DataScience #ResNet #ImageClassification #DefenseMechanisms
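As a rough illustration of the logging step described above, the sketch below collects the top-2 logits and a simple margin score for a batch of adversarial images using a pretrained ResNet-152. The preprocessing, weight choice, and exact margin definition used in the project are assumptions here.

```python
# Hedged sketch: top-2 logits and margin per adversarial image with a
# pretrained ResNet-152; not the project's exact evaluation harness.
import torch
import torchvision.models as models

model = models.resnet152(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def top2_and_margin(batch: torch.Tensor):
    logits = model(batch)                  # shape (N, num_classes)
    top2 = logits.topk(k=2, dim=1).values  # best and runner-up logits per image
    margin = top2[:, 0] - top2[:, 1]       # gap between top-1 and top-2 class
    return top2, margin

# Usage (assumed): `adv_images` is a normalized ImageNet batch of shape (N, 3, 224, 224).
# top2, margin = top2_and_margin(adv_images)
```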
-
Happy Monday! Today we are highlighting Kaitlin Kaii! Kaitlin brings much-needed skills in the social sciences and #humanities to our #research work in the #Quantum and Generative #AI space. With #cybersecurity being a massively #multidisciplinary and #interdisciplinary endeavor, it is important that we emphasize all of its aspects, from the technical to the social. #cybersec #infosec #informationsecurity #security #GenAI #generativeai #socialscience Rensselaer Polytechnic Institute Rensselaer Polytechnic Institute Information Technology & Web Science
-
Senior Lecturer in Cybersecurity (Trustworthiness in AI) at RUL, United Kingdom | SMIEEE | IEEE IFS-TC | AFHEA | EPSRC-UKRI Grant Reviewer | AE of IEEE TNSM | Open to Research Collaboration | TurkPatent ID 2023/004922
Dear Researchers, I am thrilled to share our latest paper, "Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks," accepted for publication in IEEE Transactions on Network and Service Management (IEEE TNSM), in collaboration with Mr. Imran Haider, Dr. Rahim Taheri, and Prof. Mauro Conti. In this study, we investigated the impact of #adversarial data modifications on Federated Learning systems, focusing on both the client and server sides. Our findings show that label flipping and #VagueGAN attacks have a minimal effect on server accuracy, as they are easily detected. Feature poisoning attacks, however, present a more insidious threat, subtly degrading model performance while maintaining high accuracy and attack success rates, which underscores the vulnerability of federated learning systems to such sophisticated attacks. To mitigate these risks, we explored the Random Deep Feature Selection (RDFS) defense, which randomly selects subsets of server-side features of varying sizes (e.g., 50 to 400) during training. This technique has proven particularly effective against feature poisoning attacks, reducing their impact significantly. While we will share the full version of the paper soon (including details on the defense system), the initial version focusing on the attack methods is available here: https://lnkd.in/eZ4DF2MW #FederatedLearning #DataPoisoning #CyberSecurity #MachineLearning #AIResearch #FeaturePoisoning #AdversarialAttacks #DeepLearning #AI #CyberAttacks #NetworkSecurity #AIForSecurity #DefensiveAI #DataSecurity #CollaborativeResearch #AIProtection #MLSecurity #AcademicResearch #InnovationInAI #adversarialmachinelearning #GAN #VagueGAN #RDFS #AIVulnerability Ravensbourne University London SPRITZ - Security and Privacy Research Group Università degli Studi di Padova Bahcesehir University University of Portsmouth #IEEE #IEEETNSM IEEE IEEE Computer Society
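The paper itself is the authority on how Random Deep Feature Selection works; the sketch below is only a guess at the mechanism as described in the post, i.e., training each round on a randomly chosen subset of server-side deep features whose size varies (e.g., 50 to 400).

```python
# Assumed sketch of a random deep-feature-selection step on the server;
# the published RDFS defense may differ in detail.
import numpy as np

def random_feature_subset(features: np.ndarray, rng, min_k: int = 50, max_k: int = 400):
    """Keep a randomly sized, randomly chosen subset of feature columns."""
    k = int(rng.integers(min_k, max_k + 1))
    idx = rng.choice(features.shape[1], size=k, replace=False)
    return features[:, idx], idx

rng = np.random.default_rng(42)
deep_features = np.random.rand(1024, 512)       # e.g., penultimate-layer activations
subset, kept_idx = random_feature_subset(deep_features, rng)
# The server-side classifier for this round is then trained on `subset` only,
# so a feature-poisoning attacker cannot rely on any fixed feature being used.
```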
-
"Excited to share that I've just wrapped up an insightful workshop on Cybersecurity and Artificial Intelligence in Computer Science Engineering! 🚀💻 Delving into cutting-edge topics, this workshop equipped me with valuable insights into safeguarding digital assets and leveraging AI for enhanced security measures. Grateful for the opportunity to learn and grow in this ever-evolving field! #Cybersecurity #AI #ComputerScience #Engineering #LearningAndGrowing"
-
I'm excited to share that our paper, "STFL: Utilizing a Semi-Supervised, Transfer-Learning, Federated-Learning Approach to Detect Phishing URL Attacks," has been accepted for presentation at the International Joint Conference on Neural Networks (IJCNN), part of IEEE WCCI, to be held at Pacifico Yokohama, Japan 🇯🇵, from June 30 to July 5, 2024. In this paper, we present a novel machine learning approach for phishing detection that combines semi-supervised learning, transfer learning, and federated learning. We train a Bi-LSTM autoencoder across decentralized edge devices containing unlabeled data. A centralized server collects the Bi-LSTM autoencoders and aggregates them into a global network using the FedAvg algorithm. The server then performs transfer learning, using the patterns learned by the global Bi-LSTM autoencoder to induce a classification model. Our experiments demonstrate that our proposed approach outperforms existing methods. I would like to thank all co-authors, Prof. Yuval Elovici, Prof. Asaf Shabtai, and Dr. Edita Grolman, for their help and guidance. Looking forward to presenting our paper at the conference. #IJCNN2024 #IEEEWCCI #DataScience #MachineLearning #CyberSecurity #FederatedLearning
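The server-side aggregation mentioned above is standard FedAvg; a minimal sketch is given below. Parameter names and the size-weighted averaging are generic assumptions, not the paper's exact implementation for aggregating the Bi-LSTM autoencoders.

```python
# Minimal FedAvg sketch: average per-layer client weights on the server,
# weighting each client by its local dataset size (generic illustration,
# not the paper's exact implementation).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: one list of per-layer arrays per client."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=np.float64)
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]   # size-weighted contribution
        global_weights.append(acc)
    return global_weights
```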
-
Data Science Senior | Data Enthusiast | Machine Learning | Data Visualization | Analysis and Design | Graphics | Python | R (basic) | SQL | C++
Sharing the highlights of our last-semester project, "Phishing Webpages Detection," for our Data Mining course under the supervision of Miss Eesha Tur Razia Babar! Project Overview: Our project aimed to tackle the ever-growing threat of phishing attacks, which compromise sensitive information by masquerading as trustworthy entities. By employing advanced machine learning techniques, we developed a robust system to accurately distinguish between legitimate and malicious websites. Impact: Our project not only enhances the detection of phishing websites but also contributes to the broader effort of cybersecurity resilience. By providing a reliable classification framework, we aim to mitigate phishing threats and safeguard sensitive information in the digital landscape. Key Highlights: - Dataset: We merged two distinct datasets focused on phishing websites, https://lnkd.in/dQCrdCJX - Techniques Used: We employed models such as XGBoost, Neural Network, MLP Classification, Random Forest, Gradient Boosting, KNN, and Deep Neural Network. 💻 Implementation: - Feature engineering to enhance model accuracy. - Cross-validation to ensure model robustness. - Hyperparameter tuning for optimal model performance. - Anomaly detection to identify outliers and enhance classification accuracy. - Model interpretability techniques to understand feature importance and model decisions. I am incredibly grateful for the opportunity to collaborate with Muhammad Ahmad and Farheen Akmal on this project. Looking forward to applying these skills and knowledge in future endeavors! Check out the source code and the full implementation report: https://lnkd.in/dVpxiuan #DataMining #MachineLearning #PhishingDetection #CyberSecurity #DataScience #Python
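As a small illustration of the cross-validation and hyperparameter tuning steps listed above, the sketch below tunes a random-forest baseline on a hypothetical merged phishing dataset. The file name, feature columns, model choice, and search grid are assumptions, not the project's actual configuration.

```python
# Hedged sketch of a cross-validated, tuned phishing-webpage classifier;
# "phishing_webpages.csv" and the parameter grid are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

df = pd.read_csv("phishing_webpages.csv")            # hypothetical merged dataset
X, y = df.drop(columns=["label"]), df["label"]        # assumes a binary "label" column

param_grid = {"n_estimators": [200, 400], "max_depth": [None, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="f1")
search.fit(X, y)

print("best params:", search.best_params_)
print("5-fold F1:", cross_val_score(search.best_estimator_, X, y, cv=5, scoring="f1").mean())
```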
-
🎉 Exciting News! 🎉 I'm thrilled to announce that my paper, titled "An Efficient Fake Account Identification in Social Media Networks - Facebook and Instagram using NSGA-II Algorithm," has been accepted for publication in Neural Computing and Applications (Springer, IF 4.5, Q1)! 🌟 This work presents a novel approach to identifying fake accounts on social media platforms by leveraging the NSGA-II multi-objective optimization algorithm. The acceptance of this paper marks a significant milestone in our ongoing efforts to enhance online security and trustworthiness. A big thank you to everyone who supported and contributed to this research (Agoujil Said, Abdelaaziz Hessane, Dr. Anand Nayyar). I look forward to sharing the full publication with you all soon! #Research #AI #MachineLearning #SocialMedia #CyberSecurity #NeuralComputing #NSGAII #AcademicPublishing
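For context on the method named in the title: the core step of NSGA-II is non-dominated (Pareto) sorting of candidate solutions under multiple objectives. The self-contained sketch below shows that step only; the example objectives (classification error vs. number of profile features) are purely illustrative and not taken from the paper.

```python
# Minimal non-dominated sorting, the core of NSGA-II, for two minimization
# objectives; illustrative only, not the paper's pipeline.
def dominates(a, b):
    """True if a is no worse than b on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(objectives):
    """Partition solutions (tuples of objective values) into successive Pareto fronts."""
    remaining = list(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Example: each candidate is (classification error, number of profile features used).
candidates = [(0.08, 12), (0.10, 8), (0.07, 20), (0.12, 6), (0.09, 12)]
print(non_dominated_fronts(candidates))   # the first front holds the Pareto-optimal trade-offs
```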