All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @Nour.Alghamdi, Thanks for letting me know you created a Support ticket. They should be able to take care of you. Once it's resolved, can you please share the solution as a reply to this post? Thanks!
Hi @Surafel.Teferra, Thanks for asking your question on the Community. While we wait for the community to jump in and help, I wanted to offer this AppDynamics Docs page that lists the APIs: https://docs.appdynamics.com/appd/24.x/latest/en/extend-appdynamics
Hello! I wanted to ask: what is the best way/configuration to get network device logs directly into Splunk? Thanks in advance!
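A common pattern is to point network devices at a syslog receiver (rsyslog or syslog-ng) that writes to files, and have a Universal Forwarder monitor those files, though Splunk can also listen on UDP/TCP directly. As a rough sketch (the index and sourcetype names below are illustrative placeholders, not from your environment), a direct network input in inputs.conf might look like:

```
# inputs.conf on a forwarder
# index/sourcetype names below are placeholders - adjust for your environment
[udp://514]
index = network
sourcetype = syslog
connection_host = ip
```

Writing to files via a syslog server first is generally preferred for production, since it survives Splunk restarts without dropping UDP packets.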
Hello @phanTom, I hope my message finds you well. Are those playbooks by chance in the community repository?
This is exactly what I was looking for! One interesting thing I noticed, which I am not sure is a bug or not: if you run outputlookup while _time is still in the initial pipeline, it will output _time to the lookup. This happens even if you explicitly try to remove it using the fields command. A workaround is to rename _time, which works but is not ideal. Also, to clean this up: since appendpipe appends to the results of the initial pipeline, you will need to follow with a where isnotnull(...) on a field that is null in the appended entries, to filter them out. So the resulting search would be something like:

...initial search...
``` If you don't want _time in your resulting lookup ```
| rename _time as time
| convert ctime(time)
``` Select fields for outputting to the lookup ```
| appendpipe [| fields a, b, c | outputlookup lookup_file]
``` Remove appended entries by filtering on a field that is null only in the appended output ```
| where isnotnull(d)
A snippet from strace output seems to indicate that the 30-40 mins may be taken by the SSL certificate generation steps:

<<<snipped>>>
wait4(9855, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9855
stat("/opt/splunkforwarder/etc/auth/server.pem", 0x7ffdec4c4580) = -1 ENOENT (No such file or directory)
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f143df47e50) = 9857
wait4(9857,   <<< stuck here for 30-40 mins >>>   0x7ffdec4c45f4, 0, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
wait4(9857, New certs have been generated in '/opt/splunkforwarder/etc/auth'. [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9857
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=9857, si_uid=0, si_status=0, si_utime=11, si_stime=5} ---

Strangely, this is only happening on Linux on Azure. Using openssl, I am able to generate a self-signed cert within seconds on the same machine. Our on-premises Linux (on VMware) does not experience this performance issue. Any thoughts on what the issue may be? How should I troubleshoot? Thank you
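Two possible causes worth ruling out (assumptions, not confirmed from the strace alone): a slow or timing-out DNS/hostname lookup during certificate generation, and low kernel entropy on the VM, which can stall key generation on some virtualized hosts. A quick entropy check:

```shell
#!/bin/sh
# Check available kernel entropy; persistently low values (a few hundred
# or less on pre-5.18 kernels) can stall cryptographic key generation on VMs.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy_avail: $entropy"
```

If entropy looks healthy, comparing `time nslookup $(hostname)` on the Azure box versus the VMware box would test the DNS hypothesis.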
Hi @kentagous, you can find many interesting videos on the Splunk YouTube channel (https://www.youtube.com/@Splunkofficial), and many free courses at https://www.splunk.com/en_us/training/free-courses/overview.html At the least, I suggest following the Splunk Search Tutorial, which helps you understand how to create a search (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial). About your request, it depends on the data (fields) you have. If you have the src_ip field in your index, you could run something like this: index=your_index sourcetype=your_sourcetype earliest=-30d@d latest=now | stats count BY src_ip Ciao. Giuseppe
Assuming you already have the src_ip field extracted correctly, you could try something like this | stats count by src_ip
Thanks in advance for the assistance. I am very new to Splunk; it is a great tool but I need some help. I am trying to create a filtered report with the following criteria: I am filtering the data down based on phishing, and now I need to grab each individual src_ip and count them over a 30-day period. Unfortunately, I do not have a pre-existing list of IP addresses, unlike all of the examples I've seen. My goal is to go down the list, count the number of occurrences, and show the report on a front panel. Also, any good books or video training for learning how to do advanced filtering in Splunk? Thanks
Set restartSplunkd to "true" in your serverclass for the app(s) you're deploying
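For reference, in serverclass.conf on the deployment server this looks something like the following (the server class and app names are placeholders):

```
# serverclass.conf on the deployment server
# serverClass/app names below are placeholders
[serverClass:my_class:app:my_app]
restartSplunkd = true
```

The same setting is available per app in the Forwarder Management UI under the app's Edit dialog.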
Hi @sahana, your question is really vague! The search depends on the data you're speaking about. E.g., if you're speaking of Windows logins (EventCode=4624), you could use the timechart command, something like this: index=wineventlog EventCode=4624 | timechart count Ciao. Giuseppe
I have another requirement: I want to show a bar chart of the total login count based on the time period we submit. For example, if we select 2 days, it should show a bar chart where y is the login count and x is the time selection on a daily interval (6th Feb, 7th Feb, and so on).
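Assuming the same Windows login data (EventCode=4624) from the earlier reply, a sketch with a daily span might be:

```
index=wineventlog EventCode=4624 earliest=-2d@d latest=now
| timechart span=1d count AS login_count
```

Rendered as a bar (column) chart, x is the day and y is login_count. In a dashboard you would replace the hardcoded earliest/latest with the tokens from your time input so the chart follows the submitted time period.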
Hi @splunkreal , why not? I installed this app in a Search Head Cluster without issues. Ciao. Giuseppe
Hi, I have this query to find searches whose search span is more than 90d:

index=_audit action=search info=completed NOT is_realtime=1 earliest=0
| eval search_et = if(search_et="N/A", 0, search_et)
| eval search_lt = if(search_lt="N/A", exec_time, search_lt)
| eval srch_window = ((search_lt-search_et)/86400)
| eval lookback = case(
    round(srch_window) <= 1, "-1d",
    round(srch_window) > 1 AND round(srch_window) <= 7, "1-7d",
    round(srch_window) > 7 AND round(srch_window) <= 10, "7-10d",
    round(srch_window) > 10 AND round(srch_window) <= 30, "11-30d",
    round(srch_window) > 30 AND round(srch_window) <= 60, "30-60d",
    round(srch_window) > 60 AND round(srch_window) <= 90, "60-90d",
    1=1, "+90d")
| search lookback="+90d"
| table user info event_count result_count search
| stats count avg(event_count) as avg_event avg(result_count) as avg_results values(info) as info by search, user
| sort 0 -count

You could probably modify it for your needs? r. Ismo
Hi, you could probably try Splunk workload management for this. At least it works when users try to run queries without index=xyz. See more: https://docs.splunk.com/Documentation/Splunk/latest/Workloads/WorkloadRules r. Ismo
Hello, is it possible to install SA-cim_vladiator on clustered search heads? Thanks.  
Is there any efficient way to block queries that don't specify a sourcetype? Educating users is not working, and we want to block such queries so that there is no degradation of the environment.
I am attempting to identify when Splunk users are running searches against historic data (over 180 days old). As part of the same request, I am also looking to identify where users have restored data from DDAA to DDAS in order to run searches against it. This is to build a greater understanding of how often historic data is accessed, to help guide data retention requirements in Splunk Cloud (i.e. is retention set appropriately, or can we extend/reduce retention periods based on the frequency of data access?).
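For the first part, a sketch against the _audit index, adapted from the search-span query elsewhere in this thread (the 180-day threshold and field handling are assumptions to validate against your data):

```
index=_audit action=search info=completed NOT is_realtime=1
| eval search_et = if(search_et="N/A", 0, search_et)
| eval days_back = (exec_time - search_et) / 86400
| where days_back > 180
| table _time user search days_back
```

Note that searches over "All time" report search_et=0 and so will also match; you may want to handle those separately. The DDAA restore history is a separate question and isn't captured by this search.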
Hi Manish, In the Dockerfile for the API PSA we just added: RUN npm install httpntlm Regards, Roberto