[1] VOIGT P, VON DEM BUSSCHE A. The EU general data protection regulation (GDPR): a practical guide[M]. Cham: Springer, 2017.
[2] PARDAU S L. The California consumer privacy act: towards a European-style privacy regime in the United States[J]. Journal of Technology Law & Policy, 2018, 23: 68.
[3] 中华人民共和国数据安全法[EB/OL]. (2021-06-10) [2024-04-23]. http://www.npc.gov.cn/c2/c30834/202106/t20210610_311888.html.
Data security law of the People's Republic of China[EB/OL]. (2021-06-10) [2024-04-23]. http://www.npc.gov.cn/c2/c30834/202106/t20210610_311888.html.
[4] 个人信息保护法[EB/OL]. (2021-08-20) [2024-04-23]. https://www.gov.cn/xinwen/2021-08/20/content_5632486.htm.
Personal information protection law[EB/OL]. (2021-08-20) [2024-04-23]. https://www.gov.cn/xinwen/2021-08/20/content_5632486.htm.
[5] Federal Trade Commission. California company settles FTC allegations it deceived consumers about use of facial recognition in photo storage App[EB/OL]. [2024-04-23]. https://www.ftc.gov/news-events/news/press-releases/2021/01/california-company-settles-ftc-allegations-it-deceived-consumers-about-use-facial-recognition-photo.
[6] CHEN M, ZHANG Z K, WANG T H, et al. When machine unlearning jeopardizes privacy[C]//Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2021: 896-911.
[7] CAO Y Z, YANG J F. Towards making systems forget with machine unlearning[C]//Proceedings of the 2015 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2015: 463-480.
[8] BOURTOULE L, CHANDRASEKARAN V, CHOQUETTE-CHOO C A, et al. Machine unlearning[C]//Proceedings of the 2021 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2021: 141-159.
[9] BROPHY J, LOWD D. Machine unlearning for random forests[C]//Proceedings of the 38th International Conference on Machine Learning, Jul 18-24, 2021: 1092-1104.
[10] KOH P W, LIANG P. Understanding black-box predictions via influence functions[C]//Proceedings of the 34th International Conference on Machine Learning, Sydney, Aug 6-11, 2017: 1885-1894.
[11] GUO C, GOLDSTEIN T, HANNUN A Y, et al. Certified data removal from machine learning models[C]//Proceedings of the 37th International Conference on Machine Learning, Jul 13-18, 2020: 3832-3842.
[12] GIORDANO R, STEPHENSON W T, LIU R, et al. A Swiss army infinitesimal jackknife[C]//Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Naha, Apr 16-18, 2019: 1139-1147.
[13] IZZO Z, SMART M A, CHAUDHURI K, et al. Approximate data deletion from machine learning models[C]//Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, Apr 13-15, 2021: 2008-2016.
[14] TARUN A K, CHUNDAWAT V S, MANDAL M, et al. Fast yet effective machine unlearning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(9): 13046-13055.
[15] CHEN M, GAO W Z, LIU G Y, et al. Boundary unlearning: rapid forgetting of deep networks via shifting the decision boundary[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 7766-7775.
[16] VILLARONGA E F, KIESEBERG P, LI T. Humans forget, machines remember: artificial intelligence and the right to be forgotten[J]. Computer Law & Security Review, 2018, 34(2): 304-313.
[17] SHINTRE S, ROUNDY K A, DHALIWAL J. Making machine learning forget[C]//Proceedings of the 7th Annual Privacy Forum on Privacy Technologies and Policy, Rome, Jun 13-14, 2019. Cham: Springer, 2019: 72-83.
[18] MERCURI S, KHRAISHI R, OKHRATI R, et al. An introduction to machine unlearning[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2209.00939.
[19] NGUYEN T T, HUYNH T T, NGUYEN P L, et al. A survey of machine unlearning[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2209.02299.
[20] SHAIK T B, TAO X H, XIE H R, et al. Exploring the landscape of machine unlearning: a comprehensive survey and taxonomy[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2305.06360.
[21] XU J, WU Z H, WANG C, et al. Machine unlearning: solutions and challenges[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, 8(3): 2150-2168.
[22] WARNECKE A, PIRCH L, WRESSNEGGER C, et al. Machine unlearning of features and labels[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2108.11577.
[23] GUO T, GUO S, ZHANG J W, et al. Efficient attribute unlearning: towards selective removal of input attributes from feature representations[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2202.13295.
[24] PARISI G I, KEMKER R, PART J L, et al. Continual lifelong learning with neural networks: a review[J]. Neural Networks, 2019, 113: 54-71.
[25] LIU B, LIU Q, STONE P. Continual learning and private unlearning[C]//Proceedings of the 2022 Conference on Lifelong Learning Agents, Montréal, Aug 22-24, 2022: 243-254.
[26] DWORK C. Differential privacy[M]//Encyclopedia of Cryptography and Security. Berlin, Heidelberg: Springer, 2011: 338-340.
[27] DWORK C, ROTH A. The algorithmic foundations of differential privacy[J]. Foundations and Trends in Theoretical Computer Science, 2014, 9(3/4): 211-407.
[28] CHIEN E, PAN C, MILENKOVIC O. Certified graph unlearning[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2206.09140.
[29] MAHADEVAN A, MATHIOUDAKIS M. Certifiable unlearning pipelines for logistic regression: an experimental study[J]. Machine Learning and Knowledge Extraction, 2022, 4(3): 591-620.
[30] BECKER A, LIEBIG T. Certified data removal in sum-product networks[C]//Proceedings of the 2022 IEEE International Conference on Knowledge Graph. Piscataway: IEEE, 2022: 14-21.
[31] MARCHANT N G, RUBINSTEIN B I P, ALFELD S. Hard to forget: poisoning attacks on certified machine unlearning[C]//Proceedings of the 2022 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2022: 7691-7700.
[32] SURIYAKUMAR V M, WILSON A C. Algorithms that approximate data removal: new results and limitations[C]// Advances in Neural Information Processing Systems 35, New Orleans, Nov 28-Dec 9, 2022: 18892-18903.
[33] NEEL S, ROTH A, SHARIFI-MALVAJERDI S. Descent-to-delete: gradient-based methods for machine unlearning[C]//Proceedings of the 32nd International Conference on Algorithmic Learning Theory, Mar 16-19, 2021: 931-962.
[34] YOON Y, NAM J, YUN H, et al. Few-shot unlearning by model inversion[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2205.15567.
[35] CHUNDAWAT V S, TARUN A K, MANDAL M, et al. Zero-shot machine unlearning[J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 2345-2354.
[36] HE Y Z, MENG G Z, CHEN K, et al. DeepObliviate: a powerful charm for erasing data residual memory in deep neural networks[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2105.06209.
[37] WU Y J, DOBRIBAN E, DAVIDSON S B. DeltaGrad: rapid retraining of machine learning models[C]//Proceedings of the 37th International Conference on Machine Learning, Jul 13-18, 2020: 10355-10366.
[38] GOLATKAR A, ACHILLE A, SOATTO S. Eternal sunshine of the spotless net: selective forgetting in deep networks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 9304-9312.
[39] THUDI A, SHUMAILOV I, BOENISCH F, et al. Bounding membership inference[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2202.12232.
[40] SOMMER D M, SONG L W, WAGH S, et al. Towards probabilistic verification of machine unlearning[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2003.04247.
[41] CHEN M, ZHANG Z K, WANG T H, et al. Graph unlearning[C]//Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2022: 499-513.
[42] CHEN C, SUN F, ZHANG M, et al. Recommendation unlearning[C]//Proceedings of the ACM Web Conference 2022. New York: ACM, 2022: 2768-2777.
[43] GINART A, GUAN M Y, VALIANT G, et al. Making AI forget you: data deletion in machine learning[C]//Advances in Neural Information Processing Systems 32, Vancouver, Dec 8-14, 2019: 3513-3526.
[44] YAN H N, LI X G, GUO Z Y, et al. ARCANE: an efficient architecture for exact machine unlearning[C]//Proceedings of the 31st International Joint Conference on Artificial Intelligence, Vienna, Jul 23-29, 2022: 4006-4013.
[45] SU N X, LI B C. Asynchronous federated unlearning[C]//Proceedings of the 2023 IEEE Conference on Computer Communications. Piscataway: IEEE, 2023: 1-10.
[46] SCHELTER S, GRAFBERGER S, DUNNING T. HedgeCut: maintaining randomised trees for low-latency machine unlearning[C]//Proceedings of the 2021 International Conference on Management of Data. New York: ACM, 2021: 1545-1557.
[47] LIN H W, CHUNG J W, LAO Y J, et al. Machine unlearning in gradient boosting decision trees[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2023: 1374-1383.
[48] KOH P W, ANG K S, TEO H, et al. On the accuracy of influence functions for measuring group effects[EB/OL]. [2024-04-23]. https://arxiv.org/abs/1905.13289.
[49] MAHADEVAN A, MATHIOUDAKIS M. Certifiable machine unlearning for linear models[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2106.15093.
[50] PESTE A, ALISTARH D, LAMPERT C H. SSSE: efficiently erasing samples from trained machine learning models[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2107.03860.
[51] GRAVES L, NAGISETTY V, GANESH V. Amnesiac machine learning[C]//Proceedings of the 2021 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2021: 11516-11524.
[52] LIU Y, FAN M Y, CHEN C, et al. Backdoor defense with machine unlearning[C]//Proceedings of the 2022 IEEE Conference on Computer Communications. Piscataway: IEEE, 2022: 280-289.
[53] CAO X Y, JIA J Y, ZHANG Z X, et al. FedRecover: recovering from poisoning attacks in federated learning using historical information[C]//Proceedings of the 2023 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2023: 1366-1383.
[54] ELDAN R, RUSSINOVICH M. Who's Harry Potter? Approximate unlearning in LLMs[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2310.02238.
[55] ZHANG X L, WANG J Z, CHENG N, et al. Machine unlearning methodology based on stochastic teacher network[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2308.14322.
[56] JAECKEL L A. The infinitesimal jackknife[R]. Bell Labs, 1972.
[57] HAMPEL F. The influence curve and its role in robust estimation[J]. Journal of the American Statistical Association, 1974, 69: 383-393.
[58] BECKMAN R J, TRUSSELL H J. The distribution of an arbitrary studentized residual and the effects of updating in multiple regression[J]. Journal of the American Statistical Association, 1974, 69: 199-201.
[59] COOK R D. Detection of influential observation in linear regression[J]. Technometrics, 2000, 42: 65-68.
[60] SHOSTACK A. The boy who survived: removing Harry Potter from an LLM is harder than reported[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2403.12082.
[61] SALEM A, BHATTACHARYYA A, BACKES M, et al. Updates-leak: dataset inference and reconstruction attacks in online learning[C]//Proceedings of the 29th USENIX Security Symposium, Aug 12-14, 2020: 1291-1308.
[62] GUPTA V, JUNG C, NEEL S, et al. Adaptive machine unlearning[C]//Advances in Neural Information Processing Systems 34, Dec 6-14, 2021: 16319-16330.
[63] ZHANG Z M, TIAN M C, LI C G, et al. Poison neural network-based mmWave beam selection and detoxification with machine unlearning[J]. IEEE Transactions on Communications, 2023, 71(2): 877-892.
[64] THUDI A, JIA H R, SHUMAILOV I, et al. On the necessity of auditable algorithmic definitions for machine unlearning[C]//Proceedings of the 31st USENIX Security Symposium, Boston, Aug 10-12, 2022: 4007-4022.
[65] CHEN Y T, XIONG J, XU W H, et al. A novel online incremental and decremental learning algorithm based on variable support vector machine[J]. Cluster Computing, 2019, 22: 7435-7445.
[66] PAN C, SIMA J, PRAKASH S, et al. Machine unlearning of federated clusters[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2210.16424.
[67] PEARCE T, LEIBFRIED F, BRINTRUP A, et al. Uncertainty in neural networks: approximately Bayesian ensembling[C]//Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, Aug 26-28, 2020: 234-244.
[68] CHEN Y Z, ZHANG S Z, LOW K H. Near-optimal task selection for meta-learning with mutual information and online variational Bayesian unlearning[C]//Proceedings of the 2022 International Conference on Artificial Intelligence and Statistics, Mar 28-30, 2022: 9091-9113.
[69] LIU G Y, MA X Q, YANG Y, et al. FedEraser: enabling efficient client-level data removal from federated learning models[C]//Proceedings of the 2021 IEEE/ACM 29th International Symposium on Quality of Service. Piscataway: IEEE, 2021: 1-10.
[70] SCHELTER S, ARIANNEZHAD M, DE RIJKE M. Forget me now: fast and exact unlearning in neighborhood-based recommendation[C]//Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2023: 2011-2015.
[71] MAINI P, FENG Z L, SCHWARZSCHILD A, et al. TOFU: a task of fictitious unlearning for LLMs[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2401.06121.
[72] CHA S M, CHO S J, HWANG D, et al. Learning to unlearn: instance-wise unlearning for pre-trained classifiers[C]//Proceedings of the 2024 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2024: 11186-11194.
[73] BLANCO-JUSTICIA A, JEBREEL N, MANZANARES-SALOR B, et al. Digital forgetting in large language models: a survey of unlearning methods[EB/OL]. [2024-04-23]. https://arxiv.org/abs/2404.02062.