All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Good afternoon. My leadership informed me that CrowdStrike is sending our logs to Splunk. Has anyone written any queries to show when a device is infected with malware? I don't know the CrowdStrike logs, but I'm hoping someone here can give me some guidance to get started.
Hello all, I am using Splunk as a data source and trying to build dashboards in Grafana (v10.2.2 on Linux). Is there anything in Grafana so that I do not have to write 10 queries for 10 panels? Ideally, one base query would fetch the data from Splunk, and then in Grafana I could write additional commands or functions on top of that base query for each panel, so the load on Splunk is reduced. This would be similar to a post-process search in Splunk: Post Process Searching - How to Optimize Dashboards in Splunk (sp6.io) I followed the instructions below and was able to fetch data from Splunk, but it causes heavy load and stops working the next day, with all panels showing "No Data": Splunk data source | Grafana Enterprise plugins documentation Your help will be greatly appreciated! Thanks in advance!
Hi team, I tried the search below but am not getting any results: index=aws component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d | timechart span=1d sum(kb) as Usage by series | foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
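One thing worth double-checking (an assumption about the environment, since the post doesn't show where the data lives): per_index_thruput events come from metrics.log, which Splunk normally indexes in _internal rather than a custom index such as aws. A sketch of the same report pointed at _internal would be:

```
index=_internal source=*metrics.log* component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) as Usage by series
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
```

If this returns data, the original search's only problem was the index; if not, the time range or role permissions on _internal would be the next things to check.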
I was wondering if anyone knew where I could find it, either in the logs or (even better) via the audit REST endpoint, when an automation account regenerates its auth token. I've looked through the audit logs but haven't seen an entry for it. Any leads or tips would be appreciated. Thank you.
Hi @Nour.Alghamdi, I found some existing info suggesting that if all the keys match, it could be a certificate error. Please refer to this article and see if it helps: https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-troubleshoot-EUM-certificate-errors/ta-p/22383
Hi @Nour.Alghamdi, Thanks for letting me know you created a Support ticket. They should be able to take care of you. Can you please share what the solution was as a reply to this post, thanks!
Hi @Surafel.Teferra, Thanks for asking your question on the Community. While we wait for the community to jump in and help, I wanted to offer this AppD Docs page that lists the APIs: https://docs.appdynamics.com/appd/24.x/latest/en/extend-appdynamics
Hello! I wanted to ask: what is the best way/configuration to get network device logs directly into Splunk? Thanks in advance!
Hello @phanTom , Hope my message finds you well. Are those playbooks by chance in the community repository?
This is exactly what I was looking for! One interesting thing I noticed, which I am not sure is a bug or not: if you run outputlookup while _time is still in the pipeline, it will write _time to the lookup. This happens even if you explicitly try to remove it with the fields command. A workaround is to rename _time, which works but is not ideal. Also, to clean this up: since appendpipe appends the subsearch rows to the results of the initial pipeline, you need to follow it with a where isnotnull(...) on a field that only exists in the original rows, which filters the appended rows out. So the resulting search would be something like:

...initial search...
``` If you don't want _time in your resulting lookup ```
| rename _time as time
| convert ctime(time)
``` Select fields to output to the lookup ```
| appendpipe [| fields a, b, c | outputlookup lookup_file]
``` Remove the appended rows by filtering on a field that is null only in the appended output ```
| where isnotnull(d)
A snippet from strace output seems to indicate that the 30-40 minutes are spent in the SSL certificate generation steps:

<<<snipped>>>
wait4(9855, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9855
stat("/opt/splunkforwarder/etc/auth/server.pem", 0x7ffdec4c4580) = -1 ENOENT (No such file or directory)
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f143df47e50) = 9857
wait4(9857,   <<< stuck here for 30-40 mins >>>   0x7ffdec4c45f4, 0, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
wait4(9857, New certs have been generated in '/opt/splunkforwarder/etc/auth'. [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9857
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=9857, si_uid=0, si_status=0, si_utime=11, si_stime=5} ---

Strangely, this only happens on Linux on Azure. Using openssl directly, I can generate a self-signed cert within seconds on the same machine, and our on-premises Linux (on VMware) does not experience this performance issue. Any thoughts on what the issue may be? How do I troubleshoot? Thank you.
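Two quick checks that sometimes explain silent multi-minute stalls during certificate generation on cloud VMs. Both are assumptions to rule out, not a confirmed diagnosis for this post (and the fast openssl run already argues somewhat against the first):

```shell
# Check 1 (assumption): kernel entropy starvation. Key generation can block on
# a freshly booted cloud VM whose entropy pool is nearly empty; a very low
# value here on an older kernel would support that theory.
cat /proc/sys/kernel/random/entropy_avail

# Check 2 (assumption): slow name resolution. The generated certificate embeds
# the host name, and a hanging DNS/reverse-DNS lookup produces exactly this
# kind of quiet multi-minute wait. This lookup should return almost instantly.
time getent hosts "$(hostname)" || true
```

If check 2 takes minutes, comparing /etc/resolv.conf and /etc/hosts between the Azure VM and the on-premises VM would be the next step.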
Hi @kentagous, you can find many interesting videos on the official Splunk YouTube channel (https://www.youtube.com/@Splunkofficial), and many free courses at https://www.splunk.com/en_us/training/free-courses/overview.html Above all, I suggest following the Splunk Search Tutorial, which helps you understand how to create a search (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial). As for your request, it depends on the data (fields) you have. If you have the src_ip field in your index, you could run something like this: index=your_index sourcetype=your_sourcetype earliest=-30d@d latest=now | stats count BY src_ip Ciao. Giuseppe
Assuming you already have the src_ip field extracted correctly, you could try something like this: | stats count by src_ip
Thanks in advance for the assistance. I am very new to Splunk; it is a great tool, but I need some help. I am trying to create a filtered report with the following criteria: I am filtering the data down based on phishing, and now I need to grab each individual src_ip and count its occurrences over a 30-day period. Unfortunately, I do not have a pre-built list of IP addresses, as all of the examples assume. My goal is to go down the list, count the number of occurrences, and show the report on a front panel. Also, any good books or video training for learning advanced filtering in Splunk? Thanks.
Set restartSplunkd to "true" in your serverclass for the app(s) you're deploying.
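A minimal sketch of what that looks like in serverclass.conf on the deployment server (the server class and app names here are placeholders, not from the original post):

```
[serverClass:my_class:app:my_app]
restartSplunkd = true
```

With this set, the deployment client restarts splunkd after that app is deployed or updated.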
Hi @sahana, your question is really vague! The search depends on the data you're speaking of. E.g., if you're speaking of Windows logins (EventCode=4624), you could use the timechart command, something like this: index=wineventlog EventCode=4624 | timechart count Ciao. Giuseppe
I have another requirement: I want to show a bar chart of the total login count for the time period we submit. For example, if we select 2 days, it should show a bar chart where y is the login count and x is the time, in day intervals (6th Feb, 7th Feb, and so on).
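A hedged sketch of one way to get there (assuming Windows logon events with EventCode=4624, as in the earlier reply; the index and event code are assumptions, not from this post):

```
index=wineventlog EventCode=4624
| timechart span=1d count AS login_count
```

The dashboard's time picker then bounds the search window, and span=1d yields one bar per day on the x-axis.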
Hi @splunkreal , why not? I installed this app in a Search Head Cluster without issues. Ciao. Giuseppe
Hi, I have this query to find searches whose time span is more than 90d: index=_audit action=search info=completed NOT is_realtime=1 earliest=0 | eval search_et = if(search_et="N/A", 0, search_et) | eval search_lt = if(search_lt="N/A", exec_time, search_lt) | eval srch_window = ((search_lt-search_et)/86400) | eval lookback = case( round(srch_window) <= 1, "-1d", round(srch_window) > 1 AND round(srch_window) <= 7, "1-7d", round(srch_window) > 7 AND round(srch_window) <= 10, "7-10d", round(srch_window) > 10 AND round(srch_window) <= 30, "11-30d", round(srch_window) > 30 AND round(srch_window) <= 60,"30-60d", round(srch_window) > 60 AND round(srch_window) <= 90, "60-90d", 1=1, "+90d" ) | search lookback="+90d" | table user info event_count result_count search | stats count avg(event_count) as avg_event avg(result_count) as avg_results values(info) as info by search, user | sort 0 -count You could probably modify it for your needs. r. Ismo