All Topics


How would I modify this search to exclude this IP address: src_ip!="10.0.1.90"? | tstats prestats=true summariesonly=true allow_old_summaries=true count from datamodel=Authentication.Authentication where Authentication.app=win* Authentication.action=$action$ by _time, Authentication.user span=10m | timechart minspan=10m useother=true count by Authentication.user limit=50
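One hedged way to do it: since the search runs against the Authentication datamodel, the exclusion belongs in the tstats where clause and must use the datamodel's field name. I'm assuming the field is Authentication.src here; if your datamodel maps it as src_ip, substitute accordingly:

```
| tstats prestats=true summariesonly=true allow_old_summaries=true count
    from datamodel=Authentication.Authentication
    where Authentication.app=win* Authentication.action=$action$
          Authentication.src!="10.0.1.90"
    by _time, Authentication.user span=10m
| timechart minspan=10m useother=true count by Authentication.user limit=50
```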
How can I have a UF read from a file and simultaneously send the data to the indexer in Splunk format and to another device in syslog format?
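One commonly suggested pattern, sketched below with placeholder hosts, ports, and sourcetype: syslog output is generally done from a heavy forwarder rather than a UF (the UF does not reformat events), with cloning to the syslog destination via the _SYSLOG_ROUTING key:

```
# outputs.conf (heavy forwarder -- hosts/ports are placeholders)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.10:9997

[syslog:syslog_out]
server = 10.0.0.20:514
type = udp

# transforms.conf -- route everything matching the sourcetype to syslog
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_out

# props.conf
[my_sourcetype]
TRANSFORMS-syslog = route_to_syslog
```

The tcpout group still receives the events, so the same data reaches the indexer in Splunk format and the other device as syslog.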
I am forwarding logs from the directories below. Here is the inputs.conf file:

[monitor:///u01/app/oracle/scripts/SplunkMonitoring/Log]
disabled = false
index = osb
crcSalt = <SOURCE>

[monitor:///u01/app/oracle/scripts/Logging/output]
disabled = false
index = osb

Output from /u01/app/oracle/scripts/Logging/output is forwarding successfully, but no logs were received for /u01/app/oracle/scripts/SplunkMonitoring/Log. Here is the splunkd.log:

10-09-2022 21:31:41.925 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/ServerStatus.txt'.
10-09-2022 21:32:01.208 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:0, pset=0, reuse=0. autoBatch=1
10-09-2022 21:32:05.125 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
10-09-2022 21:32:31.053 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:2, pset=0, reuse=0. autoBatch=1
10-09-2022 21:32:35.054 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
10-09-2022 21:33:00.976 +0800 INFO AutoLoadBalancedConnectionStrategy [214989 TcpOutEloop] - Connected to idx=10.9.0.49:9997:1, pset=0, reuse=0. autoBatch=1
10-09-2022 21:33:07.976 +0800 INFO TailReader [214996 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
10-09-2022 21:33:21.924 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/jms_status.txt'.
10-09-2022 21:33:21.931 +0800 INFO WatchedFile [214996 tailreader0] - Will begin reading at offset=0 for file='/u01/app/oracle/scripts/SplunkMonitoring/Log/DataSourceStatus.txt'.
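Two hedged first checks for a monitor stanza like this (paths assume a default /opt/splunkforwarder install): ask splunkd how it currently sees each file, and confirm which stanzas actually won after configuration layering:

```
# Per-file ingestion status (offsets, whether reading finished, etc.)
/opt/splunkforwarder/bin/splunk list inputstatus

# Effective monitor stanzas after all .conf layering
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug
```

The WatchedFile lines already show the files being opened at offset=0, so it is also worth checking whether events are arriving but landing under an unexpected index or sourcetype, and that the osb index exists on the indexer.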
Hi support team, I have just registered an account and received an email for resetting the password, but I have not received the email with the Controller information. Please help me check my account [email redacted]. Thanks & Regards. ^ Post edited by @Ryan.Paredez to remove the email address. For privacy reasons, please do not share your email address in community posts. If you need to share it, please do so via Community Private Message.
In much of the official Splunk documentation we read that, to "wipe" an instance, we should run the command   splunk clean all   OK. But doing so also resets the passwd file, so from then on we have no access to the Splunk instance, unless we previously made a backup and restore it after the "clean all" has done its job. Considering this, that documentation seems very dangerous to me when this case isn't called out. An example: I need to completely remove an instance from a SH cluster. 1) I follow the "clean all" documentation and end up with an unusable instance. 2) Instead, I run "clean kvstore --cluster" or "clean kvstore --all" myself, and the instance is still running and operative, without the cluster db data registered. So, should "splunk clean all" itself make a backup of the auth data for logging into the instance, or reassign the original "changeme" password?
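For what it's worth, a sketch of the manual workaround the post implies (paths assume a default /opt/splunk install; this is not an official procedure, just the backup-and-restore idea spelled out):

```
# Back up auth data before wiping, restore it afterwards
cp /opt/splunk/etc/passwd /tmp/passwd.bak
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean all -f
cp /tmp/passwd.bak /opt/splunk/etc/passwd
/opt/splunk/bin/splunk start
```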
Hello,   We have a huge setup and the UFs are managed through a deployment server. All the UFs are in remote locations, managed and installed by their respective asset owners. Lately some random clients stop phoning home to the deployment server. Though they show as disconnected, they are still sending logs, which suggests they are not stopped and no configuration changes have happened. Kindly help me understand what the issue is and how it can be handled.   Thank you.
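One hedged starting point: on the deployment server, client phonehome requests are logged in splunkd_access.log in the _internal index, so a search along these lines (verify the field names against your own _internal data) shows when each client last checked in:

```
index=_internal sourcetype=splunkd_access phonehome
| stats latest(_time) as last_phonehome by clientip
| eval last_phonehome=strftime(last_phonehome, "%F %T")
| sort last_phonehome
```

Clients that still send logs but stopped appearing here usually point at a problem reaching the DS management port specifically (firewall, DNS, deploymentclient.conf) rather than a stopped forwarder.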
Hello Splunk experts, We had an issue where several network devices were not ingesting into Splunk. Further checking with Splunk found that the logs from the router are making it to the rsyslog server, but rsyslog is NOT writing them to disk. I need to check the rsyslog config to see why the data is not being written to disk. Any suggestions on how to check this? Any example rsyslog.conf we can refer to? Thanks in advance.
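A minimal rsyslog sketch for writing network-device logs to per-host files, assuming the devices send UDP syslog on port 514 (adjust protocol, port, and paths to your setup); `rsyslogd -N1` validates the config syntax without starting the daemon:

```
# /etc/rsyslog.d/network.conf -- minimal sketch, assumed UDP/514
module(load="imudp")
input(type="imudp" port="514")

template(name="PerHostFile" type="string"
         string="/var/log/network/%HOSTNAME%/syslog.log")

if ($fromhost-ip != "127.0.0.1") then {
    action(type="omfile" dynaFile="PerHostFile")
    stop
}
```

Also worth checking that the directory is writable by the rsyslog user (or SELinux is not blocking it) and that the input module is actually loaded, since "received but not written" is most often a missing imudp/imtcp input or a permissions problem.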
I'm working on a Splunk CSC and I've found it really helpful to output logs to the search log with:   print('Whatever I want', file=sys.stderr)   Which appears in the search log as: 10-07-2022 23:47:19.915 ERROR ChunkedExternProcessor [10374 ChunkedExternProcessorStderrLogger] - stderr: Whatever I want  But that's a very beefy (even misleading) preamble. So my question is: Can I control the output that gets displayed in the search log? I'm assuming there's some file handle somewhere I can write to and I would love to get a hold of it! It's obviously not sys.stdout because that's what the actual event data gets transmitted to. Thanks!
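The stderr preamble is added by splunkd's ChunkedExternProcessor, so as far as I know it cannot be suppressed from that stream. One workaround is to log to your own file instead; a minimal sketch (the file path and logger name here are my own choices for illustration, not a documented Splunk API — in a real CSC you would likely point this at the app's local directory or the search dispatch directory):

```python
import logging
import os
import tempfile

# Write command diagnostics to a dedicated log file instead of stderr,
# so splunkd's ChunkedExternProcessor preamble never wraps the lines.
log_path = os.path.join(tempfile.gettempdir(), "my_csc.log")

logger = logging.getLogger("my_csc")
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Whatever I want")  # lands in my_csc.log with your own format
```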
Did Splunk Inc just get rid of Maxmind's free iplocation database and replace it with a different free product (dbip-city-lite.mmdb)? Am I the only one who thinks the accuracy of the IP lookups has gotten much worse with dbip-city-lite? If this new database is to be believed, all my IPv6 connections are coming from Texas, New York or Illinois and I work in Chicago (I don't). Maxmind (the free version that came with Splunk) was not 100%, but seemed to be much more accurate throughout the years I used it. Anyone agree or disagree?
Hey folks, here's a weird one... I just added a new data source (Windows share permissions) into our Splunk environment, and I'm working on some views to visualize this data for IT staff.

This isn't rocket surgery - this is pretty simple. Here's an example event, created by a PowerShell script that runs every 12 hours on Windows systems:

2022-10-07 09:31:54 DataType="SharePermissions" ShareName="Users" Account="Everyone" Type="Allow" Right="Read"

Pretty simple. However, with at least one system, I'm getting crazy data back when I search for it in the Splunk web UI:

`all_windows_index` sourcetype="PowerShell:SMBShares" host=my_hostname_here DataType="SharePermissions" | stats values(SharePath) as SharePath list(Account) as Account list(Type) as Type list(Right) as Right by host ShareName | search ( ShareName="Users" ) | search `filter_no_admin_shares` | rename host as Server

This should display a simple line, with each group or user and the rights they have on this share. No witchcraft here... But when I run the search, in the visualization (a table with zero customizations), I get something strange. (In the screenshot, I intentionally cropped the hostname from the left side of the table's row.) That text doesn't appear anywhere in the event. The event looks exactly like the example given above: plain text, single words, nothing odd. What's even weirder, it's not consistent. Here are three more refreshes *of exactly the same view*, no changes to inputs, one right after another. One of them does the right thing; the other two have more random artifacts. Between these refreshes, there were no changes in the data.

The text in these artifacts is obviously from Splunk (a lot of it looks like stuff I see in the job inspector), but it appears nowhere in the event itself, nor in the macros (simple index definitions or filters), nor in the SPL.

For some reason, Splunk is doing this itself, and I have no idea why. I *have* restarted Splunk just to make sure something didn't go sideways on me... This is Splunk Enterprise v8.2.4, on-prem. I would LOVE it if someone could explain this behavior. This is the first time I've seen this with *any* of my data. Help? Thanks so much! Chris
Splunk logs look like below:

userid=234user|rwe23|dwdwd
--
userid=id123|34lod|2323 text

How can I get the value between "=" and the first "|"? I want a table of those values, like "234user" and "id123". I tried: index=indexhere "userid=" | regex "(?<==)(?<info>.+?)(?=\|)" | dedup info | table info — this works fine in regex101, but shows 0 results in Splunk. Could anyone please help? Any help would be appreciated. Thanks!
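A likely explanation, hedged: in SPL the regex command only filters events; it never creates fields, so <info> is never extracted. The rex command does the extraction, and lookarounds aren't needed; a sketch:

```
index=indexhere "userid="
| rex max_match=0 "userid=(?<info>[^|]+)"
| mvexpand info
| dedup info
| table info
```

max_match=0 plus mvexpand handles events containing more than one userid=...| pair; drop both if each event has at most one.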
Is it possible to restrict the "splunk enable listen" command so that it only listens to certain IP addresses? Or better yet, uses an API?
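One hedged option: inputs.conf supports an acceptFrom setting on the splunktcp stanza, so the receiving port can be limited to specific networks (the addresses below are placeholders):

```
# inputs.conf on the receiver
[splunktcp://9997]
# Accept only forwarders in 10.0.1.0/24, deny everything else
acceptFrom = 10.0.1.0/24, !*
```

This is a network-level restriction rather than an API; for authenticated transport, forwarder-to-indexer SSL with certificates is the usual route.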
I am looking to monitor performance metrics of NAS devices in Splunk. I came across this app, but it seems to have reached end of life. Has Splunk replaced this product with anything new? Do we have any alternative? https://docs.splunk.com/Documentation/NetApp/2.1.91/DeployNetapp/AbouttheSplunkAppforNetAppDataONTAP
If I only want to use the "_time" field of a log to get the first and latest occurrence of an event, which commands should I use, and why?  ex: ... | stats earliest(_time) as firsttime latest(_time) as lasttime  ... or ...  | stats min(_time) as firsttime max(_time) as lasttime ...   Is there a case where I could get different results?
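A quick way to see it, hedged: earliest/latest pick values by event time order, while min/max compare values numerically; since the value being aggregated here is _time itself, the two approaches should coincide. A sketch that makes the comparison visible on your own data:

```
... | stats earliest(_time) as firsttime latest(_time) as lasttime
          min(_time) as mintime max(_time) as maxtime
| eval same=if(firsttime=mintime AND lasttime=maxtime, "yes", "no")
```

The distinction matters for fields other than _time: earliest(field) returns the value from the chronologically first event, which need not be the numerically smallest value.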
Good morning all, I have a standalone Splunk installation. There is no syslog data being transmitted, and I'm really not getting any data collected from the universal forwarder. It is phoning home, but I don't see any data from the Linux server it is installed on. I don't see any log modifications or anything. Am I misunderstanding the UF?
I have a search that gathers a bunch of data from various sources and appends it into one big stats table that reports in a customized column order. After I weed out some things I don't like, it looks perfect in search, so I appended | outputlookup file.csv to the very bottom so it writes to a reusable CSV. When I look at the dataset/CSV, my columns are rearranged into alphabetical order (caps first). Is there any way to keep my order in the CSV so that when I reference it later in an inputlookup I don't need to manually reorder it every time?
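One commonly suggested workaround, hedged: pin the field order explicitly with table (or fields) immediately before the outputlookup, so the CSV header is written in the order you name rather than whatever ordering the dataset view applies (the column names below are placeholders for your own):

```
... existing search ...
| table col_one col_two col_three
| outputlookup file.csv
```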
Hello all, I have a file that is created/appended to via a bash script (variable >> file.txt). It puts the newest data at the bottom (I plan to change that and use Python to write the file). The file is monitored by the universal forwarder, which works fine. BUT when the data gets into Splunk I have duplicate date values: the events are UP or DOWN by hostname, with a time recorded for DOWN and a time recorded for UP, and the search returns duplicate date/time values for each event. All data goes into the "main" index. The file has no duplicate time/date values. Could this be a problem with the sourcetype? Should I use a separate index for the data being monitored? The file is just a simple text file with date/time, host, and status (UP or DOWN). Any suggestions? Thanks so much, eholz1
Hello all, I would like a single Splunk query that does the following: Query "APP_A" for a specific log message, returning two values (key, timestamp). Query "APP_B" for a specific log message, returning two values (key, timestamp). Data takes roughly five minutes to process from APP_A to APP_B, so to ensure I am getting the most accurate view of the data possible, I want to offset the queries by 600 seconds. This likely means configuring query one to look back five minutes. Produce a table/report that lists ONLY the keys that are distinct to each table.

EX:
QUERY 1 RESULTS
a 1665155553
b 1665155554
c 1665155555
d 1665155556

QUERY 2 RESULTS
a 1665155853
c 1665155854
d 1665155855
e 1665155856

OVERALL RESULTS (what I really want)
b 1665155554
e 1665155856

For better or worse, here is what I have so far...

| set diff [search index="<REDACTED>" cf_org_name="<REDACTED>" cf_app_name="<REDACTED>" event_type="LogMessage" "msg.logger_name"="<REDACTED>" | rex field="msg.message" "<REDACTED>" | table masterKey timestamp ] [search index="<REDACTED>" cf_org_name="<REDACTED>" cf_app_name="<REDACTED>" event_type="LogMessage" "msg.logger_name"="<REDACTED>" | table masterKey timestamp ]

My syntax is for sure off, because the diff is not producing distinct results. Also, I haven't tried to tackle the time offset problem yet. Any help would be greatly appreciated. Thanks in advance.
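A hedged alternative to set diff (which compares whole result rows, so differing timestamps make every row "distinct"): run both searches in one multisearch, tag each side, and keep only keys seen by exactly one side. Everything in angle-bracket or APP_* form below is a placeholder mirroring the redacted search, and the earliest/latest offsets sketch the 10-minute skew:

```
| multisearch
    [ search index="<REDACTED>" cf_app_name="APP_A" event_type="LogMessage"
      earliest=-70m latest=-10m | eval src="APP_A" ]
    [ search index="<REDACTED>" cf_app_name="APP_B" event_type="LogMessage"
      earliest=-60m latest=now | eval src="APP_B" ]
| stats values(src) as src latest(timestamp) as timestamp by masterKey
| where mvcount(src)=1
| table masterKey timestamp src
```

multisearch requires each subsearch to be streaming, which plain search + eval (and rex, if re-added) satisfies.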