All Topics

I have a Splunk Enterprise cluster (version 8.1.3) that, for some reason, is not returning any results for indexed real-time searches, but regular searches and regular real-time searches work just fine. When my search app is configured with indexed_realtime_use_by_default = false, my real-time searches return fine. When indexed_realtime_use_by_default is true, the same search returns no data. If I change the search from a real-time search to any sort of historical search, I also get results, including over the same time period my real-time search is running. Does anyone have suggestions for what I should look into?
Hello, I have the following Splunk search, which returns detailed connection logs for all users of the VPN concentrator (F5) over the past 90 days. I need to run the same search for only 30 login_name users taken from a CSV file. How can I build the search syntax? My current search for all users, with " | search login_name=* ", is:

index=index-f5 sourcetype="f5:bigip:apm:syslog" ((New session) OR (Username) OR (Session deleted))
| transaction session_id startswith="New session" endswith="Session deleted"
| rex field=_raw "Username '(?<login_name>.\S+)'"
| search login_name=*
| eval session_time=tostring(duration, "duration")
| table _time login_name session_id session_time
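A subsearch against the lookup can generate the login_name filter automatically. A minimal sketch, assuming the CSV has been uploaded as a lookup file named `vpn_users.csv` (a hypothetical name) with a `login_name` column:

```
index=index-f5 sourcetype="f5:bigip:apm:syslog" ((New session) OR (Username) OR (Session deleted))
| transaction session_id startswith="New session" endswith="Session deleted"
| rex field=_raw "Username '(?<login_name>\S+)'"
| search [| inputlookup vpn_users.csv | fields login_name]
| eval session_time=tostring(duration, "duration")
| table _time login_name session_id session_time
```

The subsearch expands to `(login_name="a" OR login_name="b" OR ...)`, so only the 30 users listed in the CSV survive the filter.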
I found how-to links for generating CSRs for inter-Splunk communication and for Splunk Web, so that each can use third-party generated certs. However, the processes are almost identical, so I'm wondering whether I need to go through the process twice so that each use case gets its own cert, or whether I can do it once and use the single resulting cert for both use cases, since technically the common name would be the same for both. I couldn't find anything in the documentation that states this either way.
https://docs.splunk.com/Documentation/Splunk/8.2.5/Security/Howtogetthird-partycertificates
https://docs.splunk.com/Documentation/Splunk/8.2.5/Security/Getthird-partycertificatesforSplunkWeb
I have the search below:

| tstats summariesonly=true count, sum(All_Traffic.bytes) as total_bytes, sum(All_Traffic.packets) as total_packets from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.dest_ip, All_Traffic.action
| rename "All_Traffic.*" as *
| stats sum(total_bytes) as total_bytes, sum(total_packets) as total_packets by src_ip dest_ip action
| sort 0 -total_bytes
| streamstats count as count by action
| search count<=20

The purpose of the last three lines (sort and streamstats) is to get the top 20 results by total_bytes for each value of the action field. The only problem with this solution is that streamstats has a limit of 10000 in limits.conf. Is there a better solution?
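One way to sidestep the streamstats window limit is `dedup`, which accepts a count and keeps up to N results per combination of field values; after the sort, that yields the top 20 rows per action value. A sketch on the same field names:

```
| tstats summariesonly=true count, sum(All_Traffic.bytes) as total_bytes, sum(All_Traffic.packets) as total_packets from datamodel=Network_Traffic by All_Traffic.src_ip, All_Traffic.dest_ip, All_Traffic.action
| rename "All_Traffic.*" as *
| stats sum(total_bytes) as total_bytes, sum(total_packets) as total_packets by src_ip dest_ip action
| sort 0 -total_bytes
| dedup 20 action
```

Since the results are already sorted by -total_bytes, `dedup 20 action` keeps the first (largest) 20 rows for each action.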
Can anyone help me figure out how to export a Dashboard Studio page into a multi-page PDF? Currently, if I try to export a long dashboard, it shrinks everything onto one page and does not detect what is text and what is an image. Simple XML dashboards would lose formatting but would at least break the export into multiple pages.
Hi, I have tried many different ways to get a match with like(), comparing a token to a string, to set and unset a different set of tokens, but I just can't seem to meet the condition, even though I know I am selecting a click.value (which gets saved into a token) and that token value contains the string I am using in the like() call. What am I doing wrong? Please help.

<chart>
  <search>
    <query>index=car | dedup run_id | top limit=100 sourcetype | search sourcetype=$form.car_type$</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="charting.chart">column</option>
  <drilldown>
    <condition>
      <set token="show_panel">true</set>
      <set token="form.car_type">$click.value$</set>
      <set token="clickedfixture">$click.value$</set>
    </condition>
    <condition match="like($form.car_type$,&quot;%ford%&quot;)">
      <set token="carford">true</set>
      <unset token="data_entry"></unset>
      <unset token="attachment"></unset>
    </condition>
  </drilldown>
</chart>
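In Simple XML, drilldown `<condition>` elements are evaluated in order and only the first matching one executes; a `<condition>` with no match attribute always matches, so when it comes first, a like() condition after it can never fire. A sketch that puts the match first and repeats the common token sets in the fallback (hedged, since the intended token flow may differ):

```xml
<drilldown>
  <condition match="like($click.value$, &quot;%ford%&quot;)">
    <set token="show_panel">true</set>
    <set token="form.car_type">$click.value$</set>
    <set token="clickedfixture">$click.value$</set>
    <set token="carford">true</set>
    <unset token="data_entry"></unset>
    <unset token="attachment"></unset>
  </condition>
  <condition>
    <set token="show_panel">true</set>
    <set token="form.car_type">$click.value$</set>
    <set token="clickedfixture">$click.value$</set>
  </condition>
</drilldown>
```

Matching on $click.value$ rather than $form.car_type$ also avoids depending on a token that is being set in the same drilldown.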
Hi, I'm having issues with an ITSI glass table. We created a layout and set the primary data, and the data was there and could be visualized. After completing the configuration, we clicked save and refreshed the page, and all of the data configuration became invalid (nothing is visualized; the data is null). We then tried to redo the data configuration, saved, and refreshed the page, and the data became null again. I tried to clone the glass table and reconfigure the data, and it still doesn't work. It seems like the app has a bug or something else is wrong. Is this a frequent issue with ITSI? Is there anything I can do to resolve it? Please assist.
I have a lookup file that I am generating with a query. The query currently returns ~59,000 rows. If I run the query in a free-form Splunk search, the CSV file is populated with all 59,000+ entries, but if I schedule that query to run via a report overnight, it truncates to 50,000 entries in the CSV file. What I'm trying to reconcile about the scheduled report is:
1. Under View Recent, it took 29s to run, so it finished under any 60s limit: 00:00:29.
2. Under View Recent, it says it found 59,633 rows for a size of 8.88MB.
3. The job also says it finished and returned 59,633 results in 28.612 seconds.
I've seen a few questions about the 50k limit and the stanzas that can increase it, but my questions are:
1. Nothing in View Recent or the job warns that the results have been truncated.
2. Why does scheduling the report differ in its limits from running it as a free-form search?
I have a query that calculates the daily availability percentages of a given service for a set of hosts and is used to create a multi-series line chart in a Splunk dashboard.  My ps.sh is running every 1800 seconds (30 minutes) on my Splunk forwarders, so I assume that it has run a total of 48 times on any given day to calculate the availability in an eval.  The problem is that on the current date, the ps.sh hasn't run all 48 times yet, so I can't get a valid calculation for the current date.  However, if I was able to check if the date in question was the current date, then calculate the number of seconds that have elapsed since the nearest midnight, I could divide that figure by 1800 to figure out the total number of times ps.sh would've run so far that day (hopefully I'm not overcomplicating this).  To illustrate, here's my query with the pseudo-code of desired logic in it using rhnsd as an example process:   index=os host="my-db-*" sourcetype=ps rhnsd | timechart span=1d count by host | untable _time host count | addinfo | eval availability=if(<date is current date>,count/floor((info_max_time-<nearest midnight time>)/1800)*100,if(count>=48,100,count/48*100)) | rename _time as Date host as Host availability as Availability | fieldformat Date = strftime(Date, "%m/%d/%Y") | xyseries Date Host Availability   Any help I could get with completing the above eval would be greatly appreciated, or if I'm overcomplicating this, any alternative methodologies would be more than welcome.
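`relative_time(now(), "@d")` snaps to the most recent midnight, which covers the `<nearest midnight time>` placeholder, and comparing each day bucket's _time against it identifies the current date. A sketch under those assumptions, with a guard against a zero divisor right after midnight:

```
index=os host="my-db-*" sourcetype=ps rhnsd
| timechart span=1d count by host
| untable _time host count
| eval midnight=relative_time(now(), "@d")
| eval expected_runs=if(_time >= midnight, max(floor((now() - midnight) / 1800), 1), 48)
| eval availability=min(round(count / expected_runs * 100, 2), 100)
| rename _time as Date host as Host availability as Availability
| fieldformat Date = strftime(Date, "%m/%d/%Y")
| xyseries Date Host Availability
```

The min(..., 100) cap keeps rounding or an extra ps.sh run from pushing availability above 100%.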
We have a list of IPs in a lookup table and we want to find events that don't match them. The lookup definition "scanners_lookup" has a field called "Ip_Scanner", and the events in the index we are searching have another field called "source_ip". How do we build the search? We have tried several approaches that don't work. For instance:

index=my_index | lookup scanners_lookup  Ip_Scanner | where source_ip != IP_scanner

Thank you!
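One common pattern is to exclude the lookup values with NOT and a subsearch, renaming the lookup field to match the event field. A minimal sketch (note that field names are case-sensitive, so `Ip_Scanner` must match the lookup's column exactly):

```
index=my_index NOT [| inputlookup scanners_lookup | rename Ip_Scanner as source_ip | fields source_ip]
```

The subsearch expands to `(source_ip="1.2.3.4" OR ...)`, and the NOT drops every event whose source_ip appears in the lookup.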
I am looking for an add-on/API that can help onboard all CrowdStrike-related information to Splunk. I see that there is a "CrowdStrike Falcon Devices Technical Add-On" available; it retrieves detailed data that the CrowdStrike Falcon sensor has collected about the device, but it does not collect the list of software installed on those devices. For example, we have 5000+ Windows servers, and I want to check whether XYZ software is installed or not. Is there a way to collect installed-software information into Splunk? Many thanks in advance!
Hi all, we are running a Splunk action, run query (search), on a Phantom playbook that is active on every event coming into Phantom. However, at times the run query (search) action fails with the message: Failed to acquire lock named '08' for Action: 'run query', App: 'Splunk'. Failed to acquire the lock '08'. The run query (search) action tries to execute for 30 minutes or so and eventually fails. Any help troubleshooting this issue is highly appreciated. Thanks in advance.
Hello, we have a monitoring console that works great. I am able to connect directly to the server hosting the console and get everything I need. However, we need to start mixing this data with other dashboards on the search heads. This is where I come up dry.

| rest splunk_server=local /services/cluster/master/indexes

on the (Monitoring Console Server) returns data, unformatted data, but still data. Running the search below on the search heads returns nothing; it doesn't error out, just no data is returned:

| rest splunk_server=(Monitoring Console Server) /services/cluster/master/indexes

Where should I go from here? Any/all help is appreciated. Thanks!
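`| rest` with `splunk_server=` can only reach instances that the search head knows as distributed search peers, and the value has to match the peer's serverName, not an arbitrary hostname. A minimal sketch, assuming the monitoring console host has been added as a search peer (Settings > Distributed search > Search peers) and its serverName is `dmc01`, a hypothetical name:

```
| rest /services/cluster/master/indexes splunk_server=dmc01
```

If the endpoint actually lives on the cluster master rather than the monitoring console, the master would need to be the search peer instead.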
Hi, I have a field "IT_Managed" and its values are "Yes" or "No". I need the count AND percentage of events with "YES". It appears I am not using the stats and eval commands correctly. Here is my code:   Can you please help? Thanks
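A sketch of one way to get both numbers in a single stats call, assuming the base search terms are filled in and that the field values are literally "Yes"/"No" (string comparisons in eval are case-sensitive):

```
... your base search ...
| stats count as total, count(eval(IT_Managed="Yes")) as yes_count
| eval yes_percent=round(yes_count / total * 100, 2)
| table yes_count yes_percent
```

The eval inside count() makes that counter increment only for events where IT_Managed equals "Yes", while the plain count gives the denominator.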
Hi, I have configured a Linux server to send events to syslog-ng, but now I want to use the Splunk Add-on for Unix and Linux to make the parsing easier. Looking at its inputs.conf, it only seems relevant to a UF install. Has anyone adapted it so that the same results are achieved via a syslog ingest?
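The TA's scripted inputs (ps, cpu, vmstat, and so on) run commands on the monitored host itself, so they have no syslog equivalent; but for plain log files, one common pattern is to have syslog-ng write per-host files and monitor those, assigning the TA's sourcetypes so its field extractions still apply. A sketch with hypothetical paths:

```
# inputs.conf on the instance reading the syslog-ng output (paths are examples)
[monitor:///var/log/remote/*/secure.log]
sourcetype = linux_secure
index = os
host_segment = 4
```

host_segment = 4 takes the host name from the fourth path segment (the per-host directory), so events are attributed to the original server rather than the syslog-ng collector.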
I have the Splunk Add-on for Microsoft Office 365 app running and collecting all of the inputs successfully, with the exception of the Audit Logs input. It is collecting logs from multiple O365 tenants, and all of them show the same errors with the Audit Log input. The _internal log has errors indicating an issue with the username and credentials, but this input doesn't use credentials; it uses keys. The keys for the Azure app are valid and not expired, and I can log in successfully to the tenant with the same credentials shown in the error message. The error is below and has been sanitized.

2022-03-30 09:10:08,938 level=DEBUG pid=8229 tid=MainThread logger=splunk_ta_o365.modinputs.graph_api.GraphApiConsumer pos=GraphApiConsumer.py:_ingest:79 | datainput=b'se_audit_log_signins' start_time=1648645805 | message="ingesting message " message=graphApiMessage(id='XXXXXXXX-YYYY-XXX5-YYYY-ZZZZZZZZ', update_time=datetime.datetime(2022, 3, 30, 13, 10, 8, 751629), data='{"id": "XXXXXXXX-aXX-4cXXX-XXXX-XXXXXXXX", "createdDateTime": "2022-03-29T14:44:07Z", "userDisplayName": "XXXX XXXX", "userPrincipalName": "XXXX@YYYY.com", "userId": "XXXXXXXXXXXXXXXXXX", "appId": "00000002-0000-0ff1-ce00-000000000000", "appDisplayName": "Office 365 Exchange Online", "ipAddress": "123.123.122.123", "clientAppUsed": "Reporting Web Services", "correlationId": "XXXXXXXX-YYYY-ZZZZ-QQQQQQQQ", "conditionalAccessStatus": "notApplied", "isInteractive": true, "riskDetail": "none", "riskLevelAggregated": "none", "riskLevelDuringSignIn": "none", "riskState": "none", "riskEventTypes": [], "riskEventTypes_v2": [], "resourceDisplayName": "Office 365 Exchange Online", "resourceId": "XXXXXXXX-0000-0XXX-XX00-000000000000", "status": {"errorCode": 50126, "failureReason": "Error validating credentials due to invalid username or password.", "additionalDetails": "The user didn\'t enter the right credentials. 
\\u00a0It\'s expected to see some number of these errors in your logs due to users making mistakes."}, "deviceDetail": {"deviceId": "", "displayName": "", "operatingSystem": "", "browser": "Python Requests 2.22", "isCompliant": false, "isManaged": false, "trustType": ""}, "location": {"city": "somewhere", "state": "XXXXXX", "countryOrRegion": "US", "geoCoordinates": {"altitude": null, "latitude": XX.XXXX, "longitude": -XX.XXXX}}, "appliedConditionalAccessPolicies": []}', key='XXXXXX-XXXX-XXXX-XX-XXXXXXXXX')   Any thoughts?  Its working for all other inputs. Thanks, Robert    
Hi Team, I have recently installed a Splunk Enterprise free trial on my PC and created an HTTP Event Collector (HEC) token. I want to send some data to my Splunk instance from an external client (system), but my Splunk URL shows only http://<IPaddress>:<port>, and using it I get connection refused or invalid server. Can you please suggest how to get the correct host name and URL of my Splunk system, so that I can send data from my client to Splunk? Thanks, Kumar
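HEC listens on its own port (8088 by default), separate from the Splunk Web port shown in the browser, and authenticates with the token in an Authorization header. A sketch of a test from the external client, with placeholder host and token; check Settings > Data Inputs > HTTP Event Collector > Global Settings for whether All Tokens is enabled and whether SSL is on, and make sure the firewall allows the port:

```
curl -k https://<IPaddress>:8088/services/collector/event \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hello from external client", "sourcetype": "manual"}'
```

If SSL is disabled in the global HEC settings, use http:// instead; a connection refused often points at the wrong port or scheme rather than the wrong hostname.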
Part 1 - I have already grouped the events by log.level (which has values like error, info, warn, fatal) with: stats count(log.level) by log.level

Current output:

log.level  count
error      3
warn       31
fatal      1
info       7

Part 2 - I have a multivalue field, mulVal, at different levels. I need to loop over all fields to find those mulVal fields (at different levels) and get the first non-null mulVal field's value. If that field is not found at any level, I need to treat it as "no value" for that event. Next, I need to take the mulVal value (either the value found or "no value"), group it by log.level as shown in part 1, and display the mulVal value of the latest event in each group.

Required output:

log.level  mulVal    count
error      sample    3
warn       hello     31
fatal      no value  1
info       value     7

Thanks in advance
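Assuming the candidate locations of mulVal are known in advance (the paths below are hypothetical), `coalesce` picks the first non-null one and supplies the "no value" fallback, and `latest()` then reports the value from the most recent event in each group:

```
| eval mulVal_value=coalesce('level1.mulVal', 'level2.mulVal', 'level3.mulVal', "no value")
| stats count, latest(mulVal_value) as mulVal by log.level
| table log.level mulVal count
```

The single quotes around the dotted field names are required in eval so they are read as field references rather than string literals.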
Hi all, we have two reverse proxies, one front, one back. They both log HTTP requests and responses to the same index. Each request has a unique ID that is the same on the front and back, and I would like to correlate the front and back requests with the same unique ID. So the two searches are something like this:

index=rpx proxy=front unique_id=*
index=rpx proxy=back unique_id=*

Log lines would then look something like this (shortened for brevity):

proxy=front, unique_id=123456, time_taken=2ms
proxy=back, unique_id=123456, time_taken=5ms

My goal is to compute the delta of the time_taken field and then display it in, for instance, a timechart avg. Maybe I should do one search and correlate on the time_taken field from there?
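Since time_taken carries an "ms" suffix, it needs to be stripped to a number first; then a single search can pivot the front and back values per unique_id and chart the delta. A sketch, assuming the field names shown in the log lines:

```
index=rpx (proxy=front OR proxy=back) unique_id=*
| rex field=time_taken "(?<ms>\d+)ms"
| stats earliest(_time) as _time,
        max(eval(if(proxy="front", ms, null()))) as front_ms,
        max(eval(if(proxy="back", ms, null()))) as back_ms
        by unique_id
| eval delta_ms = back_ms - front_ms
| timechart avg(delta_ms) as avg_delta_ms
```

The eval inside each stats function keeps only the matching proxy's value, so every unique_id row ends up with both measurements side by side before the subtraction.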
03 Mar 2022 10:08:18,188 GMT ERROR [dbdiNotificationService,ServiceManagement] {} - Caught Runtime exception at service dbdiNotificationService java.lang.IllegalArgumentException: No enum constant com.db.fx4capi.Fx4cApiLocal.TradeProcessingStatus.TRADE_STATUS_CANCELLED
at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_311]
at com.db.fx4capi.Fx4cApiLocal$TradeProcessingStatus.valueOf(Fx4cApiLocal.java:10) ~[trade-22.1.1-8.jar:?]
at com.db.fx4cash.trade.step.GetTradeReferenceAndStatusStep.step(GetTradeReferenceAndStatusStep.java:24) ~[step-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.executeIteration(TransactionDispatchService.java:275) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.startDispatch(TransactionDispatchService.java:673) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.TransactionDispatchService.run(TransactionDispatchService.java:91) [servicemanagement-22.1.1-8.jar:?]
at com.db.servicemanagement.ServiceThread.run(ServiceThread.java:36) [servicemanagement-22.1.1-8.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]
----------------------------------------------------------------------------------------------------------------------------
In the string above I need to capture the highlighted part, basically whatever comes after ERROR on the first line. I am using the command below:

index=app_events_fx4cash_uk_prod source=*STPManager-servicemanagement.20220303-100818.log* | rex field=_raw "^[^\-\n]*\-\s+(?P<Error>.$)" | table error

but I am getting blank results. Please help.
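Two things in that search work against it: the pattern `.$` captures only a single character at the end of the first line, and the `table` clause asks for `error` while the capture group is named `Error` (field names are case-sensitive). A sketch of one way to pull both the message after ERROR and the exception line, hedged since the exact log layout may vary:

```
index=app_events_fx4cash_uk_prod source=*STPManager-servicemanagement.20220303-100818.log*
| rex field=_raw "ERROR\s+\[[^\]]+\]\s+\{\}\s+-\s+(?<Error>[^\r\n]+)"
| rex field=_raw "(?<Exception>[\w.]+Exception:[^\r\n]+)"
| table Error Exception
```

The first rex anchors on the literal "ERROR [...] {} - " prefix and captures through the end of that line; the second picks up the "java.lang.IllegalArgumentException: No enum constant ..." line separately.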