All Topics



Messages    Nov 20  Dec 20  Jan 20  Feb 20
Message 0   0       1       0       0
Message 1   1       3       1       1
Message 2   11      0       0       0
Message 3   1       0       0       0
Message 4   9       5       0       0
Message 5   1       1       0       0
Message 6   1       1       0       0
Message 7   0       1       0       0

Here I want to color each column's background based on the previous column's value, e.g. for Dec 20 compared with Nov 20:

Nov 20  Dec 20
0       1 (green background)
11      0 (red background)
1       0 (red background)
9       5 (red background)
1       1 (yellow background)
1       1 (yellow background)
0       1 (green background)

Comparison conditions:
- if the current value is greater than the previous column's value, it should have a green background
- if the current value is less than the previous column's value, it should have a red background
- if the current value is equal to the previous column's value, it should have a yellow background
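The comparison rule behind the requested coloring (in a dashboard this would typically be done with a table cell renderer, not search language) can be sketched on its own. This is a minimal Python illustration of the condition logic only, using the Nov 20 / Dec 20 values from the post:

```python
def cell_color(current, previous):
    """Background color for a cell compared with the previous column's value."""
    if current > previous:
        return "green"
    if current < previous:
        return "red"
    return "yellow"

# Nov 20 and Dec 20 values from the rows shown in the desired-output table
nov = [0, 11, 1, 9, 1, 1, 0]
dec = [1, 0, 0, 5, 1, 1, 1]
colors = [cell_color(d, n) for n, d in zip(nov, dec)]
print(colors)  # green, red, red, red, yellow, yellow, green
```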
Currently I am running into an issue where, if a person logs onto a server multiple times, the sessions get combined. Any ideas on how to split them? Here is sample data. Currently I am using:

| stats values(*) as * by Host Account_Name

From this:

Host        Account_Name  Duration  Session_End            Session_Start
fdk-DC01    jfrank                  1612536779 1612558813  1612536778 1612558812
fdk-DC01    ptom          00:00:02  1612563697             1612563695
fdk-Host01  jfrank        00:00:05  1612539322             1612539317
fdk-Host03  bhill                   1612540329 1612543822  1612540323 1612543816

To this:

Host        Account_Name  Duration   Session_End  Session_Start
fdk-DC01    jfrank        00:00:03   1612536779   1612536778
fdk-DC01    jfrank        00:00:07   1612558813   1612558812
fdk-DC01    ptom          00:00:02   1612563697   1612563695
fdk-Host01  jfrank        00:00:05   1612539322   1612539317
fdk-Host03  bhill         00:00:09   1612540329   1612540323
fdk-Host03  bhill         00:00:010  1612543822   1612543816

Thank you for any pointers!
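One way to think about the split (a sketch of the pairing idea, not the poster's SPL; in SPL this is often approached with mvzip/mvexpand rather than stats values(*)): pair each Session_Start with its matching Session_End instead of aggregating them, then compute a duration per pair. The assumption here is that sorted starts and ends line up one-to-one:

```python
def split_sessions(starts, ends):
    """Pair sorted start/end epochs into separate sessions with HH:MM:SS durations."""
    sessions = []
    for s, e in zip(sorted(starts), sorted(ends)):
        secs = e - s
        hhmmss = "%02d:%02d:%02d" % (secs // 3600, (secs % 3600) // 60, secs % 60)
        sessions.append({"start": s, "end": e, "duration": hhmmss})
    return sessions

# jfrank on fdk-DC01: two sessions that stats values(*) combined into one row
result = split_sessions([1612536778, 1612558812], [1612536779, 1612558813])
print(result)
```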
I have an accelerated data model where I would like to run multiple searches: a total of four searches to find data going back four weeks (eval _time = -7d@d, eval _time = -14d@d, etc.). Is there a way to run a multisearch using tstats that would run through my accelerated data model, or is there a way to do this over the pivot table? Please let me know if I need to provide more context.
I have a search that runs every day and populates a CSV that looks like this (I have more sources, but wanted to keep it simple to explain):

Source  Total  Server  Workstation  Other  Unknown  date
norton  735    178     542          5      10       1612548722
nessus  857    8       829          9      11       1612548722

I would like a time graph to show each source over time; is this possible? I've tried a few methods, but can't seem to manipulate the data to get it to work right. I know the date will have to be converted using SPL like this:

| fieldformat date = strftime(date, "%m/%d/%Y")

Any ideas how to make a time graph by source over time? Thanks!
Splunk Statistics Table - How to get the max of a column and use it to evaluate each row

Hello, looking for advice and recommendations. I have a Splunk query:

index=idx_source1 source=*app.log* clientEntitlementsCacheDataRetriever clientCount
| table _time, host, clientCount

I am trying to get the max value of clientCount, then use that value to compare against each host. The idea is to make a report/alert of hosts not having all the clients in cache. I suspect a subquery could be used, but I am not sure that will work on a report. Need help before I bang my head any more.

Steven
Is there a way to search all ES Investigations for a specific artifact or IOC that may be documented in the notes?
Trying to use the DECRYPT app and I keep getting an error. I have it installed in a SH cluster, and commands.conf has local=true; there is no streaming setting, so that should default to false and the command shouldn't run on the indexers. However, I'm still getting errors from the indexers:

"Streamed search execute failed because: Error in 'decrypt' command: External search command exited unexpectedly with non-zero error code 1"

https://splunkbase.splunk.com/app/2655/

Any suggestions?
Need to run a dbxquery command via the REST API, and I'm having trouble defining the search's time range in that context. Below I demonstrate how the queries behave on the command line with curl. Either the query is invalid because | dbxquery needs to be at the beginning of the query, or no results are returned when appending | search earliest=-1day latest=now to the end of the query. How can I correctly specify a time range when using | dbxquery via the REST API?

[root@host ~]# curl -u user:password -k https://192.168.xx.xxx:xxxx/services/search/jobs/export --data-urlencode search='dbxquery connection="xxx" query="SELECT (SELECT sum(bytes) FROM dba_data_files)+ (SELECT sum(bytes) FROM dba_temp_files)- (SELECT sum(bytes) FROM dba_
<?xml version='1.0' encoding='UTF-8'?>
<response><messages><msg type="FATAL">Error in 'dbxquery' command: This command must be the first command of a search.</msg></messages></response>

[root@host ~]# curl -u user:password -k https://192.168.xx.xxx:xxxx/services/search/jobs/export --data-urlencode search='| dbxquery connection="xxx" query="SELECT (SELECT sum(bytes) FROM dba_data_files)+ (SELECT sum(bytes) FROM dba_temp_files)- (SELECT sum(bytes) FROM dba_free_space) total_size FROM dual;
<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder>
<field>TOTAL_SIZE</field>
</fieldOrder>
</meta>
<messages>
<msg type="DEBUG">Configuration initialization for /opt/splunk/etc took 15ms when dispatching a search (search ID: 1612554968.128577)</msg>
<msg type="DEBUG">The 'dbxquery' command is implemented as an external script and may cause the search to be significantly slower.</msg>
<msg type="DEBUG">search context: user="reporting", app="search", bs-pathname="/opt/splunk/etc"</msg>
</messages>
</results>
Hello all,

I am new to Splunk and I have a question regarding Splunk field extraction. Consider the following example log snippet, which consists of 4 events. The error messages are the same except for the "sku" field, the timestamp, and the OrderNumber. After the log below has been ingested into Splunk, if I were to search for the field "errorMessage" I would get 4 results, the events below. On the other hand, can I treat all four events as duplicates of one another by ignoring all the other key-value pairs except "errorMessage", without ever asking Splunk to ignore the "sku" field?

[2021-02-05 18:00:00.00 GMT]  ERROR  OrderNumber|0001|component="DeltaInventory",errorMessage="Cannot find parent",sku="0001"
[2021-02-05 19:00:00.000 GMT]  ERROR  OrderNumber|0002|component="DeltaInventory",errorMessage="Cannot find parent",sku="0002"
[2021-02-05 20:00:00.00 GMT]  ERROR  OrderNumber|0003|component="DeltaInventory",errorMessage="Cannot find parent",sku="0003"
[2021-02-06 21:00:00.00 GMT]  ERROR  OrderNumber|0004|component="DeltaInventory",errorMessage="Cannot find parent",sku="0004"

Thanks!
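If the goal is just to treat events with the same errorMessage as one group regardless of sku, grouping on the extracted errorMessage alone does that (in SPL, something along the lines of stats count by errorMessage, or dedup errorMessage). Here is the same grouping idea as a Python sketch, using two of the sample events:

```python
import re

events = [
    '[2021-02-05 18:00:00.00 GMT]  ERROR  OrderNumber|0001|component="DeltaInventory",errorMessage="Cannot find parent",sku="0001"',
    '[2021-02-05 19:00:00.000 GMT]  ERROR  OrderNumber|0002|component="DeltaInventory",errorMessage="Cannot find parent",sku="0002"',
]

groups = {}
for event in events:
    match = re.search(r'errorMessage="([^"]+)"', event)
    if match:
        # Group on the message text only; sku/timestamp/OrderNumber are ignored
        groups.setdefault(match.group(1), []).append(event)

print({msg: len(evts) for msg, evts in groups.items()})
```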
Hello... I'm trying to create a lookup table of Windows hosts running CrowdStrike, Tenable, Bitlocker, Splunk, and DHCP, then return the state of each service. If a service is not installed, then note that it is not installed. The problem I have is that the service status is not reporting back as expected in each column.

Host   CrowdStrike  Tenable        Splunk   Bitlocker
PC-01  Running      Not installed  Running  Stopped

index=windows source="kiwi syslog server" Name="CSFalconService" OR Name="Tenable Nessus Agent" OR Name="SplunkForwarder" OR Name="Dhcp" OR Name="BDESVC"
| stats values(*) AS * max(_indextime) as indextime BY host
| eval crowdstrike=if(Name=="CSFalconService", State, "CS Agent Not Installed")
| eval tenable=if(Name=="Tenable Nessus Agent", State, "Tenable Agent Not Installed")
| eval splunk=if(Name=="SplunkForwarder", State, "Splunk Agent Not Installed")
| eval bitlocker=if(Name=="BDESVC", State, "Bitlocker Service Not Installed")
| table host crowdstrike tenable splunk bitlocker

Any help would be greatly appreciated!
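One possible pitfall here (an assumption, since the underlying data isn't visible): after stats values(*), Name becomes a multivalue field, so an equality test like Name=="CSFalconService" only matches hosts that reported exactly one service. The check that seems intended is a membership test. Sketched in Python with a simplified, hypothetical per-host model:

```python
# Simplified model of one aggregated row: the services a host reported,
# and each service's last known state (hypothetical sample data).
host_row = {
    "Name": ["CSFalconService", "SplunkForwarder", "BDESVC"],
    "State": {"CSFalconService": "Running",
              "SplunkForwarder": "Running",
              "BDESVC": "Stopped"},
}

def service_status(row, service, missing_msg):
    """Membership test against the multivalue Name field, not equality."""
    if service in row["Name"]:
        return row["State"][service]
    return missing_msg

print(service_status(host_row, "CSFalconService", "CS Agent Not Installed"))
print(service_status(host_row, "Tenable Nessus Agent", "Tenable Agent Not Installed"))
```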
Palo Alto logs are being sent to a syslog-ng server. A UF is on the syslog-ng host, forwarding logs to a Heavy Forwarder. I have a list of specific firewalls (hostnames) and zones by which I need to filter a copy of the traffic and send it to a different (separate) indexer. Is it possible to filter and route using either the UF or the HF?
I am trying to figure out how to display all of the reverse matches in a list by each event. This would include showing the original event and all events that match with that event except for having the fields reversed. For example:   Foo Bar   Would be displayed along with all reverse matches represented by:   Bar Foo    So if there were a thousand or so values, it would go down the list and find all reverse matches.   Foo Bar | Bar Foo | Bar Foo | Bar Foo ----------------------------- Hello There| There Hello | There Hello ----------------------------- Src Dst | Dst Src | Dst Src | Dst Src | Dst Src   I am not sure where to go from here (https://wiki.splunk.com/Deploy:Combine_bi-directional_network_logs). Using the example from this page, if I wanted to find bidirectional communications using these logs:   2007-09-14 10:54:58.130 0.896 TCP 216.129.82.250:2691 -> 209.104.58.141:80 3 144 1 2007-09-14 10:54:55.378 5.184 TCP 209.191.118.103:25 -> 209.104.37.200:26490 26 1453 1   I would want to search based on Source IP/Port and Destination IP/Port. I would be looking for matches based on those flipped values like:   <date> <time> <duration> <protocol> 209.104.58.141:80 -> 216.129.82.250:2691 <etc.> <date> <time> <duration> <protocol> 209.104.37.200:26490 -> 209.191.118.103:25 <etc.>   Field names would be "src_ip", "src_port", "dst_ip", and "dst_port".
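One common approach to bidirectional matching (a sketch, under the assumption that a flow is identified by its src/dst endpoint pair) is to build a direction-independent key so that A->B and B->A land in the same bucket; the analogous SPL trick is eval'ing a key from the sorted endpoints and grouping on it. In Python:

```python
flows = [
    ("216.129.82.250:2691", "209.104.58.141:80"),
    ("209.104.58.141:80", "216.129.82.250:2691"),   # reverse of the first
    ("209.191.118.103:25", "209.104.37.200:26490"),  # no reverse present
]

pairs = {}
for src, dst in flows:
    key = frozenset((src, dst))  # identical key for A->B and B->A
    pairs.setdefault(key, []).append((src, dst))

# Keep only endpoint pairs that were seen in both directions
bidirectional = [v for v in pairs.values() if len({f[0] for f in v}) > 1]
print(bidirectional)
```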
I am trying to create a query to see how long it takes for my KV pairs to be returned. I have the KV pairs set up in Java. Basically, I need to know if the duration is longer than xx milliseconds, and then count both those taking longer and those taking shorter. How would I go about a query like that?
Question to all: I need to address a vulnerability we found, CVE-2021-3177.  In order to do this, I need to upgrade Python to at least 3.9.1-3.   My two questions on this: 1. Will this break Splunk? 2. How can I accomplish this?   Thanks
Hi, I have 14 alerts that cover all the infrastructure my company uses. I get my data from a data bus every 60 minutes, but when that fails (and it can, for several hours at a time) I would like not to have to rerun the alerts manually. As a note, I don't have any elevated access to the Splunk instance or the environment, so I can't install apps or add-ins or update any conf files, but I do have access to the audit and internal indexes. Ideally I'd like my conditional trigger to be something like this:

index=_internal sourcetype=scheduler status!=success savedsearch_name="Stuff to search"
| table _time search_type status user app savedsearch_name result_count
| where result_count=0

then <search/commands to rerun alert every hour until results are in>
Hey all, looking for some help pulling some digits via regex. I am looking to pull the number directly after "Actual value" (48 in the example event below). I would like to exclude the quotes and comma if possible.

LogName=LoginPI Events
EventCode=1300
EventType=4
ComputerName=RNBSVSIMGT02.rightnetworks.com
SourceName=Login Threshold Exceeded
Type=Information
RecordNumber=285782
Keywords=Classic
TaskCategory=None
OpCode=Info
Message={ "Description": "Total login time (48s) exceeded threshold of 45s (6.67%)", "Actual value": "48", "Threshold value": "45", "AccountId": "4c06e54e-ab5f-47a6-2cc7-08d807c9fae2", "AccountName": "rightnetworks\\eloginpi049", "LauncherName": "RNBSVSI21", "Locale": "English (United States)", "RemotingProtocol": "Rdp", "Resolution": "1920 × 1080", "ScaleFactor": "100%", "TargetHost": "BPSQCP00S143", "TargetOS": "Microsoft Windows Server 2016 Standard 10.0.14393 (1607)", "EnvironmentName": "BPSQCP00S143", "EnvironmentId": "06a3c4a2-6f73-4c54-94e9-08d8040960f8", "Title": "Login time threshold exceeded"
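A pattern along these lines pulls just the digits after "Actual value", skipping the surrounding quotes and the trailing comma. Shown here in Python; in Splunk, a rex with the same pattern and a named capture group (e.g. (?<actual_value>\d+)) would be the usual equivalent:

```python
import re

# Trimmed version of the Message payload from the event above
event = ('Message={ "Description": "Total login time (48s) exceeded threshold '
         'of 45s (6.67%)", "Actual value": "48", "Threshold value": "45" }')

# Capture only the digits between the quotes after the key
match = re.search(r'"Actual value":\s*"(\d+)"', event)
print(match.group(1))
```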
What Windows, Linux, and other logs need to be sent to Splunk to pass a GSA government audit?
Hello all, we are new to Splunk, learning and working on the SLO/SLIs defined for our application. We are confused right at the start by the results from a search like the one below:

1,092 events (2/5/21 2:45:00.000 PM to 2/5/21 3:45:29.000 PM)
Failed 724
Success 722

Question: shouldn't Failed and Success add up to the 1,092 events, or are we missing something in the following search?

sourcetype="cf:logmessage"
| fields msg.message
| spath
| rename msg.message as message
| eval "test" = case('message'="Finished running cron job.","Success", 'message'="No trips ready to process.","Failed", 1=0, 'message')
| stats count(message) by test

We got a bunch of requirements; the first is to show the % of Success and % of Failed in a chart (maybe a pie chart).

Thanks and Regards,
Bojja
Hello,

I am working on a Python script that uses the gitPython package, and I am trying to distribute the app as a TA, but I'm having a hard time with how to distribute the dependency (gitPython). I checked out the code of another TA, and it seemed like they had all of their dependencies in a local folder and were able to import them. I've only used pip to install and work with dependencies. How do I install my dependencies to a local folder, and how do I import them from that folder?

UPDATE: I figured it out with the help of madscient and alacercogitatus on the Slack. To install a package locally:

pip3 install -t ./gitpython gitpython

Then I had to move the git folder from the gitPython package:

mv gitpython/git ./git

so that the structure is this:

index.py
git/
    (Git's files)

Then I just used

from git import Repo

in the Python script and changed my calls for the repo command to 'Repo'. Not deleting this just in case someone else needs help and finds this.
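An alternative to moving the git folder into the app root (a sketch only; the lib/ folder name is an assumption, not taken from any particular TA) is to keep all vendored packages in one directory and prepend it to sys.path before importing:

```python
import os
import sys

# Locate the directory holding vendored packages, e.g. installed with
# `pip3 install -t ./lib gitpython`. Fall back to the CWD when __file__
# is unavailable (for instance in an interactive session).
try:
    app_root = os.path.dirname(os.path.abspath(__file__))
except NameError:
    app_root = os.getcwd()

lib_dir = os.path.join(app_root, "lib")

# Prepend so the vendored copies win over anything on the system path.
if lib_dir not in sys.path:
    sys.path.insert(0, lib_dir)

# After this, `from git import Repo` would resolve against lib/ first.
```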
Hello, I have some alerts set up in an instance of ITSI. The emails are configured to provide a link to the results of the search used to trigger the alert. Whenever I or other Splunk admins try to click the link, it goes to Splunk Cloud and says "No valid Splunk role found in local mapping." Does anybody know how to configure the email to use a link that goes to the correct ITSI instance of Splunk? Thank you!