All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a CSV that looks like the following:

Organization | System | Scan Due Date
ABC | Jack | 7-Feb-21
ABC | Jill | 9-May-20
123 | Bob | Unspecified
123 | Alice | Unspecified
456 | James | 10-Jan-21

How do I get a count of the "Scan Due Date" field that groups all of the events as Overdue, Not Expired, or Unspecified? (I eventually want to put the results of that search into a pie chart.) Any help is appreciated!
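A sketch of one way to bucket the dates, assuming the rows are already searchable in Splunk with a field named "Scan Due Date" in %d-%b-%y format (both the field name and the date format are read off the sample and may need adjusting):

```
... base search over the scan events ...
| eval due_epoch=strptime('Scan Due Date', "%d-%b-%y")
| eval status=case('Scan Due Date'=="Unspecified", "Unspecified",
    due_epoch < now(), "Overdue",
    true(), "Not Expired")
| stats count by status
```

The resulting status/count table can be rendered directly as a pie chart from the Visualization tab.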
Hello, I'm trying to exclude the results that I obtain from this search. Essentially, it yields all bots hitting my web server. Now I'm trying to exclude these results so it shows me, hopefully, all human hits. I've been trying to do | stats count(eval(NOT match . . . , but I cannot for the life of me figure it out. Thanks!

sourcetype="access_combined" host="www7*"
| eval usersession=clientip + "_" + useragent
| sort usersession, _time
| delta _time as visit_pause p=1
| streamstats current=f window=1 global=f last(usersession) as previous_usersession
| eval visit_pause=if(usersession==previous_usersession, visit_pause, -1)
| search visit_pause!=-1
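One hedged way to get the inverse set: have the bot-finding search write its client IPs to a lookup on a schedule (the lookup name bot_ips.csv is an assumption), then subtract them:

```
sourcetype="access_combined" host="www7*"
    NOT [ | inputlookup bot_ips.csv | fields clientip ]
```

The subsearch expands into `NOT (clientip=... OR clientip=...)`, so what remains is, hopefully, the human traffic. The lookup itself can be populated by appending `| stats count by clientip | outputlookup bot_ips.csv` to the bot-finding search.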
I've configured three bash scripts, all of which do essentially the same thing:

1. Run a command and send the output to a file
2. Parse this file via a Python script (which then prints the parsed file to the console, sending it to Splunk)
3. Delete the file

Two of the three scripts work exactly as intended, with no issues whatsoever. The third, however, sends no information to Splunk at all. I can run the Python portion independently, or the bash script with the Python script, and both can additionally be run through '/opt/splunkforwarder/bin/splunk cmd'; in all instances, the desired output appears on the command line without being ingested. If I alter the script and purposely introduce errors, they appear in the _internal index. Aside from that, nothing related to issues with the script appears there. All of the scripts are set up in the exact same manner in the inputs.conf file, and they all use the same sourcetype and index. Any guidance on how to proceed with the troubleshooting would be greatly appreciated!
I have a search that currently returns: name | HH:MM:SS | Val in Seconds. I can get my single value visualization to display the HH:MM:SS, but that doesn't help me when it comes to wanting to range/color the values. Is there a way to have the colorizing use the value in seconds while the single value visualization displays the HH:MM:SS?
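One approach that may work with the classic single value visualization: keep the seconds field, derive a range class from it with rangemap, and let that class drive the color while the HH:MM:SS field stays first in the results (so it is what gets displayed). The thresholds and field names below are placeholders:

```
... | table duration_hms, seconds
| rangemap field=seconds low=0-300 elevated=301-900 default=severe
```

and in the panel's Simple XML:

```
<option name="classField">range</option>
```

This is a sketch of the older rangemap/classField styling path, not the newer format-menu color ranges (which only see the displayed field).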
Splunk 8.0.4.1 on Windows 2016. I am using a Heavy Forwarder to index syslog data, with multiple ports and a sourcetype per port. All ports should be forwarded to our default indexer, but in addition, per sourcetype/port, the data should be forwarded to an additional indexer (at our security operations center). I have tried something similar to "Define typical forwarder deployment topologies", but so far I have only been able to disable all forwarding, which is really not what I want. Is it possible to clone data, selected per sourcetype, to two indexers?

UPDATE: I think I found the doc that I need: "Perform selective indexing and forwarding". I need to read a bit more about _INDEX_AND_FORWARD_ROUTING... I hope...
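A sketch of cloning by sourcetype with _TCP_ROUTING on the heavy forwarder (the group names, hosts, and sourcetype below are placeholders):

```
# outputs.conf
[tcpout]
defaultGroup = default_indexers

[tcpout:default_indexers]
server = idx1.example.com:9997

[tcpout:soc_indexers]
server = soc-idx.example.com:9997

# props.conf
[syslog_port_514]
TRANSFORMS-route_soc = route_to_soc

# transforms.conf
[route_to_soc]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = default_indexers,soc_indexers
```

Everything follows defaultGroup; events of the listed sourcetype get their routing overridden to both groups, which clones them. This relies on the data passing through parsing, which it does on a heavy forwarder.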
Hi! I have a drop-down menu that has 3 static options, "Server," "Non-Server," and "All."    How do I hide a panel if a user selects "Server" or "Non-Server"?  
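One common Simple XML pattern (token and value names are assumptions): set a token only on the value that should show the panel, and make the panel depend on it:

```
<input type="dropdown" token="stype">
  <label>Type</label>
  <choice value="server">Server</choice>
  <choice value="nonserver">Non-Server</choice>
  <choice value="all">All</choice>
  <change>
    <condition value="all">
      <set token="show_panel">true</set>
    </condition>
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_panel$">
  ...
</panel>
```

The panel renders only while $show_panel$ is set, i.e. when "All" is selected; "Server" or "Non-Server" unsets it and hides the panel.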
Need help converting rows into columns based on the primary key column values. I have data with 3 columns:

DocID | DocType | DocProperty
123 | Soft Copy | xy
123 | Hard Copy | zx
124 | Soft Copy | xy

I need the result shown below:

DocID | DocType1 | DocType2 | DocProperty1 | DocProperty2
123 | Soft Copy | Hard Copy | xy | zx
124 | Soft Copy | | xy |

Note: I have tried different ways but no luck; all I am getting is:

DocID | DocType1 | DocType2 | DocProperty1 | DocProperty2
123 | Soft Copy | (empty) | xy | (empty)
123 | (empty) | Hard Copy | (empty) | zx
124 | Soft Copy | | xy |

Related records should be in one line without empty cells. Thanks!
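A sketch of one approach: number each DocID's rows with streamstats, fan values out into numbered fields using eval's curly-brace field-name substitution, then collapse to one row per DocID. This assumes at most two rows per DocID, as in the sample:

```
... | streamstats count as seq by DocID
| eval type_name="DocType".seq, prop_name="DocProperty".seq
| eval {type_name}=DocType, {prop_name}=DocProperty
| stats values(DocType1) as DocType1, values(DocType2) as DocType2,
        values(DocProperty1) as DocProperty1, values(DocProperty2) as DocProperty2
        by DocID
```

The stats pass is what merges the two partially filled rows for DocID 123 into one line, which is exactly the step missing from the "empty cell" output above.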
Does anyone know how to configure the Cisco Umbrella Add-on to also send the Umbrella logs to a syslog server? I've tried the info here (https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd#Syslog_data), but I seem to get all of the data coming into my Splunk system sent to the syslog server, not just the Umbrella logs. I'm wondering if there's a way to make it work for only the Umbrella data. Thanks!
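A sketch of restricting syslog output to one sourcetype with _SYSLOG_ROUTING (the sourcetype and group name are assumptions). The key is to define the syslog group without making it a default, and select events into it with a transform:

```
# outputs.conf
[syslog:umbrella_syslog]
server = syslog.example.com:514

# props.conf
[cisco:umbrella]
TRANSFORMS-umbrella_syslog = send_umbrella_to_syslog

# transforms.conf
[send_umbrella_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = umbrella_syslog
```

If a defaultGroup pointing at the syslog output is set in outputs.conf, every event goes there regardless, which matches the "all data" symptom; removing the default and routing per sourcetype as above limits it.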
Hello, I have a search head cluster and an indexer cluster. When I am on one of the search heads and run this ldapsearch command, I get results. It works perfectly.

| ldapsearch search="(&(objectCategory=Person)(objectClass=User)(lockoutTime>=1))" domain="MYDOMAIN.COM" basedn="OU=Users,OU=NYHQ,OU=US,DC=MYDOMAIN,DC=com"

However, all the indexers throw this spurious error, which doesn't seem to impact the results:

[indexer1.mydomain.com] External search command 'ldapsearch' returned error code 1. Script output = " ERROR "KeyError at ""/opt/splunk/var/run/searchpeers/B8AB8EAB-1DD4-42C8-83DE-945995C604D4-1592589919/apps/SA-ldapsearch/bin/packages/splunklib/client.py"", line 1653 : u'ldap'" "

When I log in directly to my indexers and execute the same ldapsearch locally, I don't receive any errors. SA-ldapsearch is configured on both the indexers and the search heads. Each one has a valid ldap.conf and passwords.conf present in $SPLUNK_HOME/etc/apps/SA-ldapsearch, and I am able to AD-authenticate on all of the machines. Any idea why these spurious errors are thrown when searching from the search heads but not when running locally on the indexers? Thanks!
I am receiving logs from the forwarders and can see latency between index time and event time. The difference between index time and event time is about 15 to 16 hours on more than 300 forwarders. How can I fix this issue?
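A constant offset of that size usually points to a timezone mis-parse rather than real transit delay. A hedged sketch of the usual fix, in props.conf on the parsing tier (the sourcetype name and zone are placeholders for wherever the events actually originate):

```
[my_sourcetype]
TZ = America/New_York
```

To confirm the diagnosis first, compare event time with index time per host:

```
index=foo | eval lag=_indextime-_time | stats avg(lag) by host
```

A near-constant lag across hosts suggests timezone; a ragged one suggests genuine forwarding delay.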
Hi, I want to count two different stats and join them in the same resulting table. Can you remind me how to do this? The string is too complex to duplicate, but I'll illustrate what is there:

index=server application=app page=x OR page=y OR page=z
| stats count as pageloads by page
| stats distinctcount(id) as usercount by page
| lookup excel.csv page AS pagetitle OUTPUTNEW author,creationdate,lastmodified
| table pageloads,usercount,pagetitle,author,creationdate,lastmodified
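The second stats call discards everything the first one produced; a single stats can compute both aggregates at once, which keeps them in the same rows:

```
index=server application=app (page=x OR page=y OR page=z)
| stats count as pageloads, dc(id) as usercount by page
| lookup excel.csv page AS pagetitle OUTPUTNEW author, creationdate, lastmodified
| table pageloads, usercount, pagetitle, author, creationdate, lastmodified
```

The lookup line is carried over verbatim from the question; whether the event field matched against the lookup is `page` or `pagetitle` is an assumption inherited from it.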
Just trying to find a way to get src or dst info for the matching signature group-by values:

| tstats allow_old_summaries=true count from datamodel=Intrusion_Detection by IDS_Attacks.signature
| `drop_dm_object_name("IDS_Attacks")`
| search [ | inputlookup org_applicationattack.csv | fields signature ]
| xswhere count from count_by_signature_1h in ids_attacks by signature is above minimal
| where count > 50
| fields signature count source ids_type vendor action
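Any field that isn't in tstats' by clause (or aggregated) is unavailable downstream, which is why src/dst never show up. A sketch that carries them through (the field names src and dest are assumed from the CIM Intrusion Detection data model):

```
| tstats allow_old_summaries=true count from datamodel=Intrusion_Detection
    by IDS_Attacks.signature IDS_Attacks.src IDS_Attacks.dest
| `drop_dm_object_name("IDS_Attacks")`
| search [ | inputlookup org_applicationattack.csv | fields signature ]
```

Note that adding src/dest to the by clause also splits the count per src/dest pair; if the per-signature count is still needed for the xswhere step, re-aggregate with `eventstats sum(count) as count by signature` before it.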
I have my Splunk Enterprise instance set up on a Windows server. I also have 4 universal forwarders set up on Windows servers and 4 more universal forwarders set up on Linux servers. Right now, I want to forward PerfMon stats from the 4 Windows servers. When I was setting up the forwarders on these servers, I checked the boxes in the forwarder installers that allowed certain PerfMon counters to be sent to the Splunk server. Those are successfully sending certain counters like "Available Bytes", "Bytes Received/sec", etc. My question is: how can I add more of these PerfMon counters to be forwarded to my Splunk server? I saw some information about the Splunk App for Windows, but that was using deployment clients and things like that, and I was wondering if there was a simpler way of just adding specific PerfMon counters in the forwarder settings on each server. Would it be better to use the Splunk App for Windows instead?
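Counters can be added per forwarder in inputs.conf (the installer checkboxes just generate stanzas like this). The object, counter names, and index below are examples:

```
[perfmon://Memory]
object = Memory
counters = Available Bytes; Pages/sec; % Committed Bytes In Use
interval = 60
index = perfmon
```

A forwarder restart picks up the change. The Splunk Add-on for Windows packages ready-made stanzas like these, and a deployment server becomes the usual way to push them once you manage more than a handful of hosts; editing each host by hand works fine at this scale.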
I'm dealing with a set of web servers with an inconsistent access-logging configuration. There is some variability in the path and the name of the files:

/usr/local2/searchapps/v-ESP_ssl/logs/access.log
/usr/local2/searchapps/v-admin11/conf/ssl/logs/access_log
/usr/local2/searchapps/v-admin11/logs/access.log

There are additional file patterns in these same directory paths that I don't want to index:

<PATH>/access.log-2020-06-13-1592036161
<PATH>/access.log-2020-06-13-1592036161.gz

It seems I can pull everything in correctly with two [monitor] stanzas:

[monitor:///usr/local2/searchapps/v-*/.../logs/access*]
index = web
sourcetype = access_combined
crcSalt = <SOURCE>
whitelist = access[\._]log$

[monitor:///usr/local2/searchapps/v-*/logs/access*]
index = web
sourcetype = access_combined
crcSalt = <SOURCE>
whitelist = access[\._]log$

For my own education, I'm trying to simplify the configuration by collapsing them into a single stanza and getting rid of the whitelist. So far I've been unsuccessful, mostly, it seems, due to the behavior of the ... and * wildcards. Is there a way to combine those two inputs into one and simplify further?
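One hedged consolidation: anchor the monitor at the fixed part of the path and push all of the variability into the whitelist, since whitelist regexes match against the full path (the regex below assumes the three path shapes shown are the only variants):

```
[monitor:///usr/local2/searchapps]
index = web
sourcetype = access_combined
crcSalt = <SOURCE>
whitelist = /v-[^/]+(/conf/ssl)?/logs/access[._]log$
```

The trade-off is that Splunk now watches the whole searchapps tree, which can add file-tracking overhead on hosts with many apps, so the two-stanza version may actually be the cheaper one at runtime.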
All, I am having an authentication issue. If I run the splunk command in the Command Prompt, I am able to log on as admin. However, when I try to log on as admin through the web UI, it fails to authenticate. I am not able to log on using my AD account either. I tried resetting the admin password, and the new password worked in the Command Prompt, but not in the web UI. When I looked at the splunkd.log file, I noticed that it always tries to forward the username (even admin) to the LDAP server and then fails, saying the username is invalid. I haven't changed the LDAP settings or the AD group name, or reset the AD account used to bind LDAP (the account is not locked). Any idea how to fix this issue?
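If the UI is being forced through LDAP first, one hedged recovery step is to temporarily switch authentication back to native Splunk in $SPLUNK_HOME/etc/system/local/authentication.conf and restart:

```
[authentication]
authType = Splunk
```

That should restore the local admin login so the LDAP strategy settings can be re-examined from the UI. Treat this as a diagnostic sketch, not the root-cause fix; the underlying question of why splunkd stopped falling back from LDAP to native auth still needs investigating (e.g. a recently precedence-shadowed authentication.conf in another app).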
I have a set of web servers with an inconsistent logging configuration. I've been unable to come up with a single monitor stanza to cover the following requirements.

Input paths:

/usr/local2/webapps/brt/logs/access.log
/usr/local2/webapps/admin/conf/ssl/logs/access_log
/usr/local2/webapps/admin/logs/access.log

There are other filename patterns in these directories that I do not want to index:

<path>/access.log-2020-06-13-1592036161
<path>/access.log-2020-06-13-1592036161.gz

It seems I can pull everything in correctly with two [monitor] stanzas:

[monitor:///usr/local2/webapps/v-*/.../logs/access*]
index = web
sourcetype = access_combined
crcSalt = <SOURCE>
whitelist = access[\._]log$

[monitor:///usr/local2/webapps/v-*/logs/access*]
index = web
sourcetype = access_combined
crcSalt = <SOURCE>
whitelist = access[\._]log$

For my own education, I'm trying to produce a simpler input stanza by collapsing them into a single stanza and getting rid of the whitelist. So far I've been unsuccessful, mostly, it seems, due to the behavior of the ... and * wildcards. Is there a way to combine and simplify those two inputs into one?
When going through the SAML configuration setup on Splunk Enterprise, is the Entity ID a field where I can put anything, as long as it matches what I have on the IdP side? In essence, could I put in BLAHBLAHBLAH on both the Splunk side and the IdP side and it would be okay, or is the entity ID hard-coded in an auth config somewhere in Splunk?
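For reference, the entity ID is just a configurable string in authentication.conf, not hard-coded; the UI writes it into the SAML settings stanza, conventionally like this:

```
[authentication]
authSettings = saml
authType = SAML

[saml]
entityId = BLAHBLAHBLAH
```

So on the Splunk side an arbitrary value works as long as the IdP is configured with the same string. One caveat: many IdPs expect a URI-shaped entity ID (it's conventionally set to the Splunk instance's URL), so an arbitrary token may be rejected on that side even though Splunk accepts it.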
I'm trying to figure out a way to sort events by how similar the wording in a free-form text field is.

Generate sample data:

| makeresults
| eval raw="1:i like cats,2:i like turtles,3:i like turtles,4:cats are mean,5:mary had a little lamb"
| makemv delim="," raw
| rex field="raw" "(?<event_id>\d):(?<event_log>.+)"
| table event_*

Sample data output:

event_id | event_log
1 | i like cats
2 | i like turtles
3 | i like turtles
4 | cats are mean
5 | mary had a little lamb

The output I'm after must yield a value that I can sort or filter on to identify the events with the most similar text. None of the specifics of the example below are important: percent shared words is preferred, but I can work with a count of shared words and likely other outputs. The formatting of the example is not important either; e.g., a multivalue field would be just fine in place of the CSV field "event_ids". Myriad other considerations, like how exactly to split on words that may contain punctuation, will be handled later.

Satisfactory output example, using percent shared words:

similarity | event_ids
100% | 2, 3
66% | 1, 2
66% | 1, 3
33% | 1, 4

I've tried a good handful of things involving splitting followed by multiple rounds of stats by, but I can't quite get there. I'm familiar with the Levenshtein feature of the URL Toolbox too, but I couldn't think of how to use it to compare each event with every other event. FWIW, this solution does not need to be especially performant; it will process a few hundred events at a time on a schedule, so expensive options like map and foreach black magic are acceptable. Half-baked ideas welcome!
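A half-baked sketch of pairwise shared-word counts in pure SPL: explode each event into words, collect which events contain each word, then expand every event pair sharing a word and count. (Converting counts into a percentage would additionally need each event's own word count, e.g. captured with eventstats before the mvexpand; omitted here.)

```
... sample data generation from above ...
| eval word=split(event_log, " ")
| mvexpand word
| dedup event_id word
| stats values(event_id) as ids by word
| where mvcount(ids) > 1
| eval id1=ids, id2=ids
| mvexpand id1
| mvexpand id2
| where id1 < id2
| stats count as shared_words by id1, id2
| sort - shared_words
```

On the sample data this should rank the pair 2,3 first (three shared words), then 1,2 and 1,3 (two each), then 1,4 (one). The dedup guards against double-counting a word repeated within one event; the id1 < id2 comparison keeps one row per unordered pair, though note it is a string comparison, so multi-digit IDs would need a numeric cast first.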
I have a dashboard where all the panels use the same base search. I now want to add another panel that uses the same base search query but specifies a different time range from what is used elsewhere. Is it possible to override the <earliest> and <latest> nodes that are specified in my base search? The time range I want to use is outside of that in my base search. I get the validation warning:

Unknown node <latest>
Node <latest> is not allowed here
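Post-process searches inherit the base search's time range, and <earliest>/<latest> aren't allowed inside a <search base="..."> element (hence the validation warning), so the usual workaround is to give that one panel its own standalone search with its own range:

```
<panel>
  <single>
    <search>
      <query>index=web ... | stats count</query>
      <earliest>-30d@d</earliest>
      <latest>now</latest>
    </search>
  </single>
</panel>
```

The query text here is a placeholder standing in for the base query. The cost is running the query twice; since the desired window lies outside the base search's window anyway, the base search's results couldn't have served this panel regardless.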
Here is a sample of my log:

{
  NIC: {
    eth2: {
      linkSpeedInKbps: 10000000
      macAddress: XX:XX:XX:XX:XX:XX
      name: eth2
      stats: {
        network.dropped_received_pkts: 0
        network.dropped_transmitted_pkts: 0
        network.error_received_pkts: 0
        network.error_transmitted_pkts: 0
        network.received_pkts: 760176
        network.received_rate_kBps: 19842
        network.transmitted_pkts: 3140672
        network.transmitted_rate_kBps: 143753
      }
    }
    eth3: {
      linkSpeedInKbps: 10000000
      macAddress: XX:XX:XX:XX:XX:XX
      name: eth3
      stats: {
        network.dropped_received_pkts: 0
        network.dropped_transmitted_pkts: 0
        network.error_received_pkts: 0
        network.error_transmitted_pkts: 0
        network.received_pkts: 1068
        network.received_rate_kBps: 2
        network.transmitted_pkts: 2
        network.transmitted_rate_kBps: 0
      }
    }
  }
  nodeName: MyServer01
}

I am capturing basic network information on the servers in my environment. I would like to format a dashboard to look something like this: (screenshot of the desired per-interface table omitted). I can't figure out how to get the chart to format correctly. I have tried the following:

index=mylogs sourcetype=serverstats nodeName=MyServer01
| chart latest("NIC.*.name") as "*", latest("NIC.*.linkSpeedInKbps") as "* Speed", latest("NIC.*.macAddress") as "* MAC Address" by "NIC.*.name"

And I don't get any results. I am capturing the information and logging it, and I can change the format of the log if I need to. Does anyone have any ideas on how I can get this to work?
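The wildcard in the by clause is likely the problem; chart doesn't expand `by "NIC.*.name"`. A sketch of one workaround: take the latest event, flip the NIC.* fields on their side with transpose, parse the interface out of the field name, and pivot back (field names follow the log sample above):

```
index=mylogs sourcetype=serverstats nodeName=MyServer01
| head 1
| table NIC.*.name, NIC.*.linkSpeedInKbps, NIC.*.macAddress
| transpose
| rex field=column "NIC\.(?<nic>[^.]+)\.(?<attr>[^.]+)$"
| xyseries nic attr "row 1"
```

transpose names its value column "row 1", hence the quoted reference in xyseries; the result should be one row per interface with name, linkSpeedInKbps, and macAddress columns. This assumes the JSON fields are auto-extracted as NIC.eth2.name etc.; if not, a `spath` before the table would be needed.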