Empathy is a driving force in our connection to others, our mental well-being, and our resilience to challenges. The human capacity for empathy extends beyond people: research shows that we can empathize with characters in literature and even with inanimate or artificial systems. With the rise of generative AI systems that interact with us in daily life, such as ChatGPT, it is important to understand how empathy unfolds toward stories from human vs. AI narrators, and how empathy might change when the author of a story is made transparent to users. In this work, we conduct four crowd-sourced studies with N=985 participants to understand how and why empathy shifts across human-written vs. AI-written stories. To this end, we trained a model on stories annotated with empathic similarity relationships (i.e., two narrators empathize with one another), and used this model to retrieve stories users might empathically resonate with in response to their own personal stories. We compared across conditions in which the author of the story was or was not made transparent to users, and we discuss the ethical implications of fostering empathy toward AI as well as the role of deception in this phenomenon. We find that participants consistently empathized less with retrieved AI-written stories, whether or not they knew the author was an AI, but that participants showed significantly greater willingness to empathize with AI when the author of a story was disclosed.