All Topics

Hey folks, here's a weird one... I just added a new data source (Windows share permissions) into our Splunk environment, and I'm working on some views to visualize this data for IT staff.

This isn't rocket surgery - this is pretty simple. Here's an example event, created by a PowerShell script that runs every 12 hours on Windows systems:

2022-10-07 09:31:54 DataType="SharePermissions" ShareName="Users" Account="Everyone" Type="Allow" Right="Read"

That's pretty simple. However, with at least one system, I'm getting crazy data back when I search for it in the Splunk web UI:

`all_windows_index` sourcetype="PowerShell:SMBShares" host=my_hostname_here DataType="SharePermissions"
| stats values(SharePath) as SharePath list(Account) as Account list(Type) as Type list(Right) as Right by host ShareName
| search ( ShareName="Users" )
| search `filter_no_admin_shares`
| rename host as Server

This should display a simple line, with each group or user and the rights they have on this share. No witchcraft here... But when I run the search, in the visualization (a table with zero customizations), I get something like:

[screenshot - in the above, I intentionally cropped the hostname from the left side of the table's row]

That text doesn't appear anywhere in the event. The event looks exactly like the example given above: plain text, single words, nothing odd. And what's even weirder, it's not consistent. Here are three more refreshes *of exactly the same view*, no changes to inputs, one right after another. One of them does the right thing. The other two have more random artifacts:

[screenshots]

Between these refreshes, there were no changes in the data.

The text in these artifacts is obviously from Splunk (a lot of it looks like it comes from stuff I see in the job inspector), but it appears nowhere in the event itself, nor in the macros (simple index definitions or filters), nor in the SPL. For some reason, Splunk is doing this itself; I have no idea why.

I *have* restarted Splunk just to make sure something didn't go sideways on me... This is Splunk Enterprise v8.2.4, on-prem. I would LOVE it if someone could explain this behavior. This is the first time I've seen this with *any* of my data.

Help?

Thanks so much!
Chris
Splunk logs look like below:

userid=234user|rwe23|dwdwd -- userid=id123|34lod|2323 text

How can I get the value between "=" and the first "|"? I want a table of the values between "=" and the first "|", like "234user" and "id123". I tried:

index=indexhere "userid=" | regex "(?<==)(?<info>.+?)(?=\|)" | dedup info | table info

This works fine in regex101, but shows 0 results in Splunk. Could anyone please help? Any help would be appreciated. Thanks!
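A minimal sketch of one possible fix, assuming the raw event text looks like the sample above: the regex command only filters events and does not create fields, so an extraction with rex may be what's needed (the field name "info" is just an example):

index=indexhere "userid="
| rex max_match=0 "userid=(?<info>[^|]+)"
| mvexpand info
| dedup info
| table info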
Is it possible to restrict the "splunk enable listen" command so that it only listens to certain IP addresses? Or, better yet, can this be configured via an API?
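A hedged sketch of one way to restrict the receiving port by source address: the port created by "splunk enable listen" is a [splunktcp] input, and inputs.conf supports an acceptFrom allow-list. The port number and CIDR ranges below are examples only:

# $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
acceptFrom = 10.0.0.0/8, 192.168.1.0/24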
I am looking to monitor performance metrics of NAS devices in Splunk. I came across this app, but it seems it has reached end of life. Has Splunk replaced this product with anything new? Do we have any alternative? https://docs.splunk.com/Documentation/NetApp/2.1.91/DeployNetapp/AbouttheSplunkAppforNetAppDataONTAP
If I only want to use the field "_time" of a log to get the first and latest occurrence of an event, which commands should I use, and why?

ex: ... | stats earliest(_time) as firsttime latest(_time) as lasttime ...

or

... | stats min(_time) as firsttime max(_time) as lasttime ...

Is there a case where I could get different results?
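For what it's worth, a small sketch of the numeric form with readable output; my understanding (not an official statement) is that since _time is a numeric epoch value, min/max compare it numerically while earliest/latest take the value from the chronologically first/last matching event, so for _time itself the two forms should normally agree:

... | stats min(_time) as firsttime max(_time) as lasttime
| eval firsttime=strftime(firsttime, "%Y-%m-%d %H:%M:%S"), lasttime=strftime(lasttime, "%Y-%m-%d %H:%M:%S")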
Good morning all, I have a standalone Splunk installation. There is no syslog data being transmitted, and I'm really not getting any data collected from the universal forwarder. It is phoning home, however I don't see any data from the Linux server that it is installed on. I don't see any log modifications or anything. Am I misunderstanding the UF?
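In case it helps, a hedged sketch of what a file-monitoring input on the forwarder might look like; a UF only sends what its inputs tell it to, so without a monitor stanza (beyond its own internal logs) nothing from the host gets collected. The path and index below are examples, not your actual configuration:

# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder
[monitor:///var/log/messages]
index = main
disabled = 0

# then restart the UF and check what it is watching:
$SPLUNK_HOME/bin/splunk list monitor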
I have a search that gathers a bunch of data from various sources and appends it into one big stats table that I have reporting in a customized column order. After I weed out some things I don't like, it looks perfect in search, so I appended:

| outputlookup file.csv

to the very bottom so it'd write to a reusable CSV. When I look at the dataset/CSV, it is rearranging my columns into alphabetical order (caps first). Is there any way to keep my order in the CSV so that, when I reference it later in an inputlookup, I don't need to manually reorder it every time?
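A hedged sketch of the workaround I'd try (the column names here are placeholders): pin the order with a table command immediately before writing, and/or re-apply it when reading the lookup back. The stored file may still be ordered however Splunk chooses, but the search output will follow the order you list:

... | table col_first col_second col_third
| outputlookup file.csv

| inputlookup file.csv
| table col_first col_second col_third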
Hello All,

I have a file that is created/appended via a bash script (variable >> file.txt). It puts the newest data at the bottom (I plan to change that and use Python to write the file). The file is monitored by the Universal Forwarder - that works fine. BUT when the data gets into Splunk, I have duplicate date values - the events are an UP or DOWN by hostname, with a time recorded for DOWN and a time recorded for UP. The search returns duplicate date/time values for each event. All data goes into the "main" index. The file has no duplicate time/date values.

Could this be a problem with the sourcetype? Should I use a separate index for the data being monitored? The file is just a simple text file with date/time, host, and status (UP or DOWN).

Any suggestions? Thanks so much, eholz1
Hello all, I would like a single Splunk query that does the following:

Query "APP_A" for a specific log message, returning two values (key, timestamp)
Query "APP_B" for a specific log message, returning two values (key, timestamp)
Data takes roughly five minutes to process from APP_A to APP_B. So, to ensure I am getting the most accurate view of the data as possible, I want to offset the queries by 600 seconds. This likely means configuring query one to look back five minutes.
Produce a table / report that lists ONLY the keys that are distinct to each table.

EX:
QUERY 1 RESULTS
a 1665155553
b 1665155554
c 1665155555
d 1665155556

QUERY 2 RESULTS
a 1665155853
c 1665155854
d 1665155855
e 1665155856

OVERALL RESULTS (what I really want)
b 1665155554
e 1665155856

For better or worse, here is what I have so far...

| set diff
[search index="<REDACTED>" cf_org_name="<REDACTED>" cf_app_name="<REDACTED>" event_type="LogMessage" "msg.logger_name"="<REDACTED>" | rex field="msg.message" "<REDACTED>" | table masterKey timestamp | ]
[search index="<REDACTED>" cf_org_name="<REDACTED>" cf_app_name="<REDACTED>" event_type="LogMessage" "msg.logger_name"="<REDACTED>" | table masterKey timestamp | ]

My syntax is for sure off, because the diff is not producing distinct results. Also, I haven't tried to tackle the time offset problem yet. Any help would be greatly appreciated. Thanks in advance.
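A hedged sketch of an alternative that keeps only keys seen on exactly one side; the app names, time ranges and the extraction stand in for the redacted parts, and it assumes masterKey and timestamp exist on both sides. One likely issue with the set diff attempt as written: the timestamps differ between the two apps, so no row is ever identical on both sides; grouping on masterKey alone avoids that:

| multisearch
[ search index="<REDACTED>" cf_app_name="APP_A" event_type="LogMessage" earliest=-70m latest=-10m | rex field="msg.message" "<REDACTED>" | eval side="A" ]
[ search index="<REDACTED>" cf_app_name="APP_B" event_type="LogMessage" earliest=-60m latest=now | eval side="B" ]
| stats values(side) as sides min(timestamp) as timestamp by masterKey
| where mvcount(sides)=1
| table masterKey timestamp sides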
I have a search that leverages a kvstore lookup that takes the src IP and then checks the lookup to see what core, context, and zone the IP is associated with:

| lookup zone_lookup cidr_range as src
| fillnull value=NULL
| search context!="" core!="" zone!=""
| eval core=coalesce(core,"null")
| eval context=coalesce(context,"null")
| eval zone=coalesce(zone,"null")

Unfortunately, we do not have a ROA for this info, so we have populated the kvstore lookup from various sources as best we can, but sometimes we'll see src IPs with no zone listed. I do have a table I keep that allows me to fill in those blanks, and it's a simple table as follows:

cidr_range    zone
x.x.x.x/16    zone1
y.y.y.y/24    zone2
z.z.z.z/24    zone3

I'd like to create a search that appends my lookup with this data - how would I write that search? Thx
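A hedged sketch of one way to append those rows into the existing KV store collection; "zone_fill.csv" is a hypothetical lookup file containing the table above, and any fields the CSV does not carry (core, context, etc.) will simply be empty on the appended rows. The dedup keeps the existing entry when a cidr_range already exists:

| inputlookup zone_lookup
| append [ | inputlookup zone_fill.csv ]
| dedup cidr_range
| outputlookup zone_lookup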
Hi Splunkers,

There is one field that is common to 2 indexes. Using that field, how can I correlate the two and build a table out of it without using the join, append, or appendpipe commands? Those commands take a lot of time. Please refer to the pictures below.

Thanks & regards
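A common pattern for this, sketched with placeholder names (index_a, index_b, common_field and the field names are examples): search both indexes at once and let stats group by the shared field, which avoids join/append entirely:

index=index_a OR index=index_b
| stats values(field_from_a) as field_from_a values(field_from_b) as field_from_b dc(index) as index_count by common_field
| where index_count=2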
Hi, I have to scale down my search head cluster to a standalone search head, but there is no documentation anywhere. Is it possible? What steps should I perform?
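Not an official procedure, just a rough outline of the pieces involved as I understand them (test on a non-production copy first): remove the extra members from the cluster, then take the clustering configuration off the surviving instance and restart it.

# on each member being retired (exact flags depend on your version)
splunk remove shcluster-member

# on the instance that will become standalone: remove or comment out the
# [shclustering] stanza in $SPLUNK_HOME/etc/system/local/server.conf, then
splunk restart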
Hi everyone, I have a Splunk universal forwarder installed on a Linux machine and have configured some log files to be forwarded to the indexer. But I am getting the error below, and the data is not getting ingested into Splunk.

Input type: File

Error:
0100 ERROR Metrics - Metric with name thruput:thruput already registered
0100 ERROR Metrics - Metric with name thruput:idxSummary already registered
Hi,

I have implemented the Splunk Add-on for Microsoft Cloud Services, and while I can get data in, the field names are very difficult to make use of as they are prefixed with body.fieldname. Any ideas on how to make this more usable?
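One approach that may help, sketched as an assumption rather than the add-on's documented behaviour: alias the dotted field names to friendlier ones with a field alias in props.conf on the search head. The sourcetype and field names below are examples only; substitute the ones you actually see in your data:

# props.conf
[mscs:azure:eventhub]
FIELDALIAS-strip_body_prefix = "body.operationName" AS operationName "body.category" AS category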
I have changed the ownership with:

chown -R root:root /opt/splunkforwarder

After that, I started Splunk as the root user, but once it finished starting, the owner:group reverted back to splunk:splunk. The same situation persists even after restarting Splunk and restarting the OS. Why does it revert back to splunk:splunk? I want to operate with root:root as the owner:group under /opt/splunkforwarder.

https://docs.splunk.com/Documentation/Splunk/9.0.1/ReleaseNotes/KnownIssues#Universal_forwarder_issues
Hi,

I wanted to get the details of the top 5 indexes consuming the most license, separated by date, for the last 7 days, in a single query. For example:

16th - top 5 indexes -- GB
17th - top 5 indexes -- GB
18th - top 5 indexes -- GB
.........

Please help me with the above query.
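A hedged sketch of one common approach, assuming the license manager's _internal index is searchable from where you run this (the fields b and idx are what license_usage.log normally carries, but verify against your own data):

index=_internal source=*license_usage.log type=Usage earliest=-7d@d latest=@d
| bin _time span=1d
| stats sum(b) as bytes by _time idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort 0 _time -GB
| streamstats count as rank by _time
| where rank<=5
| table _time idx GB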
Is there a method of tracking a service ceiling over the long term? I have daily transactions that are being summarized over a suitable interval and written to a summary index. I wish to keep a maximum of the transaction fields (count, success, by category, etc.) for hourly and daily intervals, and have the maximum of the maximums, or peak(maximum), for each of those transaction fields representing the service ceiling or maximum observed values for those fields. The maximum observed value will later be used to calculate a utilization of the service.

I am kind of thinking that the answer is probably a daily report that consumes the summary index, calculates the daily maximum observed values, then writes the daily maximums to a summary-index stash. I am having trouble approaching the problem and am looking for ideas and/or guidance. Currently I am playing with streamstats and a window:

| search ...
| bin span=600s _time
| streamstats window=1 current=f sum(successful) AS previous_successful_transactions
| streamstats sum(successful) as successful_transactions
| fillnull value=0 previous_successful_transactions successful_transactions, peak_transactions
| eval peak_transactions=if(successful_transactions>previous_successful_transactions, successful_transactions, peak_transactions)
| chart max(previous_successful_transactions) as previous_successful_transactions max(peak_transactions) as peak_transactions by _time
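For what it's worth, a hedged sketch of the daily-rollup idea described above; the index, source and field names are placeholders, and it assumes the existing summary already carries a successful count per category:

index=summary source="hourly_transaction_summary" earliest=-1d@d latest=@d
| bin _time span=1h
| stats sum(successful) as successful by _time category
| stats max(successful) as peak_successful_per_hour by category
| eval _time=relative_time(now(), "-1d@d")
| collect index=summary source="daily_service_ceiling"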
Hi,

I have a basic question about the append limit, which is 50000 events max. Does it mean that only the first 50000 events sorted by timestamp are displayed (from newest to oldest)?

And in some discussions, it seems that this limit could be overridden with:

| sort 0

https://community.splunk.com/t5/Splunk-Search/Using-sort-0-to-avoid-10000-row-limit/m-p/502707

Is that true, or is the only way to change the limit to modify limits.conf? Thanks
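My understanding (please verify against the docs for your version): "sort 0" only removes the sort command's own output truncation; the ceiling on an append subsearch comes from limits.conf and from append's own maxout option, so raising it would look something like the sketch below. The values shown are examples, not recommendations:

# limits.conf
[searchresults]
maxresultrows = 100000

... | append maxout=100000 [ search ... ]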
Hi, I don't know where the problem is. The search is:

| rex '(?<field>H.+)\\' | table field

I want to use a regular expression for parsing a field, and I want to show the result in the output. The field contains a path like \Pc\Hardware\Nice\ok. I want to get all of the words after Pc\. I don't know how to solve this; it literally does nothing and returns the same original data. Thank you
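A hedged sketch of how this is often written (the source field name, path_field, is a placeholder): rex expects the regex in double quotes, and literal backslashes typed in the search bar usually have to be doubled twice, i.e. four backslashes to match one backslash in the data:

... | rex field=path_field "\\\\Pc\\\\(?<after_pc>.+)"
| table after_pc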
Hello Splunkers, Is there a way to identify/search what SMB version is being used across the network? I am looking to detect SMBv1 specifically to use it as a source for disabling SMBv1 throughout the network. Regards
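One possible angle, sketched with several assumptions: if SMB1 access auditing is enabled on the Windows file servers (Set-SmbServerConfiguration -AuditSmb1Access $true), they write event ID 3000 to the Microsoft-Windows-SMBServer/Audit log, and a search like the one below would list clients still speaking SMBv1. The index and source names are examples, the rex assumes the event text contains a "Client Address:" line, and it assumes that event log channel is being collected:

index=wineventlog source="WinEventLog:Microsoft-Windows-SMBServer/Audit" EventCode=3000
| rex "Client Address:\s+(?<client_address>\S+)"
| stats count by host client_address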