Is it illegal to collect website information through crawlers?

The legal risks of crawlers mainly fall into three areas: acting against the website's will, for example by forcibly breaking through anti-crawling measures the site has put in place; interfering with the normal operation of the website being visited; and scraping categories of data or information that are protected by law.

So how can a crawler developer stay out of legal trouble when using crawlers? Strictly follow the robots protocol (robots.txt) set by the website; keep your code well behaved so it does not interfere with the normal operation of the site you visit, even while working around anti-crawler measures (see the sketch below); when designing the crawling strategy, be cautious about scraping data that may constitute copyrighted works, such as video and music, and about bulk-scraping user-generated content from particular sites; and when using or distributing the scraped information, review the content, and if you find personal information, private data, or other people's trade secrets, stop and delete it promptly.
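As a minimal sketch of the first two precautions, the helper below checks a site's robots.txt before fetching a page and pauses between requests so the crawler does not burden the server. The function name, user-agent string, and delay value are illustrative assumptions, not part of any particular site's policy.

```python
# A minimal "polite fetch" sketch: respect robots.txt and throttle requests.
# polite_fetch, the user agent "example-crawler", and the 2-second delay
# are hypothetical choices for illustration only.
import time
import urllib.parse
import urllib.request
import urllib.robotparser


def polite_fetch(url, user_agent="example-crawler", delay_seconds=2.0):
    """Fetch a page only if robots.txt allows it, then pause briefly."""
    parts = urllib.parse.urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    # Download and parse the site's robots.txt.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()

    # Honor the site's wishes: skip paths disallowed for our user agent.
    if not rp.can_fetch(user_agent, url):
        return None

    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request) as response:
        body = response.read()

    # Throttle so repeated calls do not disrupt the site's normal operation.
    time.sleep(delay_seconds)
    return body


# Example use (hypothetical URL):
# html = polite_fetch("https://example.com/some/page")
```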