All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Firstly, what do you mean by move? Secondly, why don't you just send the data to the right index in the first place?
Sample logs:

Mon Sep 04 13:23:40 2024 -- (eMonitoringLATN_install.sh) Your current directory is [/d/onedemand/etc]
Mon Sep 05 12:21:30 2024 -- (eMonitoringLATN_install.sh) Final Destination reached logs.
Mon Sep 06 12:21:30 2024 -- (eMonitoringLATN_install.sh) logs ingestion started.

We tried the props below for the sample logs above, but the line breaking is not happening correctly. Please also let us know how to specify the time format.

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\w+\s\w+\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4}
TIME_PREFIX=^
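A possible fix, as a hedged sketch (the sourcetype stanza name is a placeholder): keep the timestamp pattern in a lookahead so it is not swallowed by the break, and add a TIME_FORMAT matching the "Mon Sep 04 13:23:40 2024" shape. This assumes the events really are newline-separated in the raw feed.

```
# props.conf -- sketch; [my_sourcetype] is a placeholder name
[my_sourcetype]
SHOULD_LINEMERGE = false
# Break before each "Mon Sep 04 13:23:40 2024"-style timestamp.
# Only the first capture group is discarded at the break; the lookahead
# guarantees the timestamp stays at the start of the next event.
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4})
TIME_PREFIX = ^
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 25
```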
Hello guys, I wrote the correlation search query and added the Adaptive Response Actions (notable, risk analysis, and send to SOAR), but when the event goes to SOAR there is no event_id in the artifacts.

SPL query:

[ | tstats summariesonly=true values("Authentication.user") as "user", values("Authentication.dest") as "dest", count("Authentication.dest") as "dest_count", count("Authentication.user") as "user_count", count from datamodel="Authentication"."Authentication" where nodename="Authentication.Failed_Authentication" by "Authentication.src","Authentication.app"
| rename "Authentication.src" as "src", "Authentication.app" as "app"
| where 'count'>5 ]
When you're trying to install apps from Splunkbase, it asks for your splunk.com user so that Splunk can pull the app from there. It has nothing to do with your local users.
I just installed Splunk and the username/password works fine for logging in, but when I try installing add-ons it tells me "Incorrect username or password". Is there a second password? [solved]

Thanks to TheBravoSierra (Path Finder), 09-17-2021 07:25 AM:

Possible fix: go to splunk.com or Splunkbase and log in, view your profile, and then you can see your username. It is probable that the username is different than you remember. This was the case for me.

Note: I couldn't figure out how to delete the comment, so I modified it. Hope it's helpful for someone.
Hi guys. I'd like to know if there is a way to schedule an input script to run multiple times even if it does not end with an exit code. To explain: I have some scripts which need to wait some time before exiting with an output and/or an exit code. At the same time, I need to rerun the same script even if the previous run is still going in the background. Splunk can't do this, since it monitors the launched script and waits for an exit code before running the new one. Example:

[script://./bin/my_script.sh]
index=blablabla
sourcetype=blablabla
source=blablabla
interval=60
...

Let's say "my_script.sh" simply contains (it's only an example):

#!/bin/bash
date
sleep 90

Now, with all the methods I have tried, including running it as [script://./bin/my_script.sh &], or with a launcher.sh which detaches a child process via "bash -c 'my_script.sh &' &" or "exec my_script.sh &", the "sleep 90" prevents splunkd from rerunning the script every 60 seconds, since the previous script needs 90 s to exit. So in my indexed data I find data every 2 minutes, because of the 90 s sleep:

10:00 splunkd launches "my_script.sh" and waits for its exit code to index data
10:01 splunkd tries to launch a new "my_script.sh", but stops because the previous "sleep 90" is still running
10:02 splunkd indexes the 10:00 data and reschedules a new "my_script.sh"
10:03 as 10:01
10:04 as 10:02
... and so on.

Is there a way to force a rerun even if a previous script PID is still running, so that I get output from "my_script.sh" at 10:00, 10:01, 10:02, 10:03, ...? Thanks.
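One common workaround, as a sketch (not tested against your setup; all paths and names are placeholders): make the scripted input a tiny launcher that detaches the real worker and exits immediately, and index the worker's output from a file via a [monitor://] stanza instead of from the script's stdout.

```shell
#!/bin/bash
# launcher.sh - hypothetical detach-and-exit wrapper; paths are placeholders.
# splunkd gets an exit code immediately, so it can reschedule the input
# every interval even while earlier workers are still running or sleeping.
WORKER="${WORKER:-/bin/date}"       # the real long-running script
OUT="${OUT:-/tmp/my_script.out}"    # file a [monitor://] stanza would tail
nohup "$WORKER" >> "$OUT" 2>&1 &    # background the worker so we can exit now
disown                              # detach it from this shell's job table
exit 0
```

The scripted input then just runs launcher.sh every 60 s; the events themselves are indexed from the monitored output file, so each run's output lands with its own timestamps regardless of how long the worker sleeps.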
Yes. I would understand it this way: the search will be run, but the alert will not be triggered during the throttle period. So side effects of the search itself (outputlookup, sendemail, whatever) will run on each search run.
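For reference, a sketch of what those throttle settings look like in savedsearches.conf (the stanza name, field, and period are placeholders, not from this thread):

```
[My Throttled Alert]
# Suppress further alert triggers, per value of the given field, for 1 hour.
# The scheduled search itself still runs on its normal schedule.
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 1h
```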
Just about every Splunk query contains a pipe, so I question the accuracy of the post you quoted. My theory (and I can't find any documentation to support it) is that the collect command does not return the field being throttled, so the alert cannot be throttled. The collect command is used to write results to an index, and is used to populate summaries more than to trigger alerts. I suggest having two searches: one to populate testindex and the other to read testindex and trigger the alert.
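The two-search split could look roughly like this (index names, fields, and the threshold are placeholders): the first search runs on a schedule with no alerting and populates the summary index via collect; the second runs later, reads it back, and is the one that alerts and carries the throttle settings.

```
index=main sourcetype=my_data error
| stats count by host
| collect index=testindex

index=testindex
| stats sum(count) as total by host
| where total > 10
```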
Thanks Mitesh,

The "failed to create input" was because I already had an identical one from previous testing! The rest of the app is still a mystery to me; I have emailed Cisco but got no reply.

Thanks again, FQzy
Hi @FAnalyst, as @PickleRick also said, an .spl is a tar.gz file that you can open with a tool like 7-Zip and then search the savedsearches.conf file, in which you can find the correlation searches. But what's your requirement?
Cheers,
Giuseppe
There are license reports mentioned by @gcusello which are quite useful, but they might contain some summarized data. You can search your raw data and calculate it manually:

index=whatever
| stats sum(eval(len(_raw))) BY host

Some caveats though:
1. It is slow - it must read all events and check their lengths. You can work around this problem by creating an indexed field containing the event's length.
2. It shows the size of searchable data. It won't show data you don't have access to (if you're searching through multiple indexes and have rights to only some of them), won't show you the size of "removed" data if someone used | delete, and will probably be limited by your role's search filters, and so on.
3. Be cautious about _time vs. _indextime.
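The indexed-length workaround from caveat 1 might be wired up with INGEST_EVAL at ingest time, roughly like this (stanza, sourcetype, and field names are placeholders):

```
# transforms.conf
[add_raw_length]
INGEST_EVAL = raw_length=len(_raw)

# props.conf
[my_sourcetype]
TRANSFORMS-rawlen = add_raw_length
```

With raw_length indexed, the per-host size can then come from tstats instead of scanning raw events, e.g. | tstats sum(raw_length) where index=whatever by host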
There are so many things that could be wrong here. Did you troubleshoot anything? Connectivity? AD logs? splunkd.log? Do you have any other solutions authenticating with AD? Did you change anything in your environment lately?
If I understand correctly what you're saying, it's not a problem with Splunk but rather with the quality of your data. If the same "truth" in reality is expressed as events saying different things in Splunk, it means there's something wrong with how your sources are set up.
Hi @SanjayM
As you said, there are no matches on Splunkbase for Dell Unity Storage. I see only two possible ideas:
- You can ask Dell Support whether they have any roadmap for one.
- Or you can create one yourself and contribute it to Splunkbase (help may be available if you get stuck somewhere).
Thanks,
Best Regards
Sekar
Hi @Splunkers2, may we know if you use the CIM, please? Please copy and paste the Splunk search query you used (remove sensitive data before posting). Thanks.
Hi @Mitesh_Gajjar
More details are needed, please:
- the Splunk version
- the daily license limit
- approximately how long ago Splunk was installed (to find out how much data is currently stored in Splunk)
- details about the indexers/SHC (to understand how many indexers are in the indexer cluster)
- the SF and RF
- and, most important, the internal Splunk logs
May I know if you have created a support ticket with Splunk, please?
Best Regards
Sekar
When I create a new input, the prompt asks me to enter "User" and "Secret / Password", and they are required. But I have "xpack.security.enabled: false" in my ElasticSearch.yml. Right now I can't pull data from Elasticsearch into Splunk. How can I fix it?
Hi @araczek
The simple idea is to search existing questions for this. Also, please give us more details: the Splunk version, when it last worked fine, and whether you can log in with a regular user. Once logged in, please list the other users configured, etc.
Best Regards
Sekar
Hi @FAnalyst, with this REST API call you can get the table; then count by the username. Let us know if you have any further queries. Thanks.

| rest /servicesNS/-/-/data/ui/views
| table author label eai:acl.app
| eval Type="Dashboards"
| rename author as Owner label as Name eai:acl.app as AppName
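The counting step could then be sketched like this (field names follow the rename above; the sort is optional):

```
| rest /servicesNS/-/-/data/ui/views
| rename author as Owner, label as Name, eai:acl.app as AppName
| stats count as DashboardCount by Owner
| sort - DashboardCount
```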