All Topics



Hi Team, I'm onboarding custom Windows events to Splunk. This is the stanza I'm using:

[WinEventLog://Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational]
disabled = 0
index = wineventlog

But I'm not able to see the logs in Splunk.
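A cleaned-up sketch of that inputs.conf stanza for reference (assuming a universal forwarder and that the `wineventlog` index already exists on the indexers; `renderXml` is optional):

```ini
# inputs.conf on the forwarder.
# The channel name must match the output of `wevtutil el` exactly.
[WinEventLog://Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational]
disabled = 0
index = wineventlog
renderXml = false
```

Common pitfalls with this kind of input are the target index not existing on the indexers, and the forwarder not being restarted after the change.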
Hello folks, I need help with an index-based search for any user being added to multiple Windows groups (preferably more than a count of 5) in a time span of 15 minutes. Thank you
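A rough SPL sketch of one approach, assuming Windows Security events land in `index=wineventlog` and that the member and group field names match your TA's extractions (EventCodes 4728, 4732, and 4756 are the group-membership-added events for global, local, and universal groups):

```
index=wineventlog EventCode IN (4728, 4732, 4756)
| bin _time span=15m
| stats dc(Group_Name) AS groups_added values(Group_Name) AS groups BY _time, member_id
| where groups_added > 5
```

`bin _time span=15m` uses fixed 15-minute buckets; for a true sliding window you would switch to `streamstats` with `time_window=15m` instead.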
Hello, I have a value in a multiselect filter (imagine the value is the "something" that I wrote in the image) that I want to remove/hide. Is there a way to do that?
Dear Community, I am trying to replicate my company's Splunk setup in my home lab for learning purposes. It is a single-site cluster with an indexer cluster and a search head cluster. So far I am able to set up the cluster master, add the indexers to it, set up the SHC with its members, bootstrap it, and then add the SHC members to the cluster master. But I am doing something wrong here: when I look at the MC on the cluster master, only the cluster master itself shows up, with all roles attached. The previously added machines are not appearing. One question ahead: is this kind of setup possible using the free license on each peer instead of a license master? Thanks in advance
Hi there, I am attempting to ingest data from the WindowsUpdateLog using the Splunk Windows TA. I have attached a screenshot of the inputs.conf stanza relating to this file and confirmed that the file path is correct. Any help would be appreciated. Jamie
Hi, I have been struggling with this for a while. I have an epoch time value (a 10-digit number) that I want to use as both the rising column and the timestamp for the event. The first works fine, but getting DB Connect to use it for the event timestamp does not. I know about the workaround of

timestamp '1970-01-01 00:00:00' + ( "<my_field>" / 86400 ) AS eventTime,

but I would prefer not to ingest an extra field if it is not really necessary. There is no config I can get to work for the timestamp. From the Java DateTimeFormatter docs it appears not to be possible, but I just want to ask here if anyone has found something.

Sources: https://docs.splunk.com/Documentation/DBX/3.13.0/DeployDBX/Troubleshooting

I have also tried using props.conf:

[mysourcetype]
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_FORMAT = %s
TIME_PREFIX = my_field=

but it doesn't even reach this part and errors out earlier. Error pattern:

2023-06-20 10:27:07.521 +0200 Trace-Id=940a0c4d-8c64-44d7-a948-39fd6e9b7417 [Scheduled-Job-Executor-5] ERROR org.easybatch.core.job.BatchJob - Unable to process Record: {header=[number=1, source="SHE170U-JOURNEY_TRACKER", creationDate="Tue Jun 20 10:27:07 CEST 2023"], payload=[HikariProxyResultSet@2091779680 wrapping oracle.jdbc.driver.ForwardOnlyResultSet@4f527ef2]}
java.time.format.DateTimeParseException: Text '1687249036' could not be parsed at index 0
    at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2052)
    at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1880)
    at com.splunk.dbx.server.dbinput.task.processors.ExtractIndexingTimeProcessor.extractTimestampFromString(ExtractIndexingTimeProcessor.java:112)
    at com.splunk.dbx.server.dbinput.task.processors.ExtractIndexingTimeProcessor.extractTimestamp(ExtractIndexingTimeProcessor.java:92)
    at com.splunk.dbx.server.dbinput.task.processors.ExtractIndexingTimeProcessor.processRecord(ExtractIndexingTimeProcessor.java:46)
    at org.easybatch.core.processor.CompositeRecordProcessor.processRecord(CompositeRecordProcessor.java:61)
    at org.easybatch.core.job.BatchJob.processRecord(BatchJob.java:209)
    at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:178)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:101)
    at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.runTask(InputServiceImpl.java:298)
    at com.splunk.dbx.server.api.resource.InputResource.lambda$runInput$1(InputResource.java:162)
    at com.splunk.dbx.logging.MdcTaskDecorator.run(MdcTaskDecorator.java:23)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
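For reference, the value in the error is ordinary Unix epoch seconds, which is why a pattern-based DateTimeFormatter cannot parse it as a date string. A quick sketch of what the value actually represents:

```python
from datetime import datetime, timezone

# The raw value from the DB Connect error message: seconds since 1970-01-01 UTC
epoch_seconds = 1687249036

# Convert to an aware UTC datetime
event_time = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
print(event_time.isoformat())  # 2023-06-20T08:17:16+00:00
```

Note that 08:17 UTC is 10:17 CEST, which lines up with the creationDate in the logged record.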
Hi good people, is it just me or has the new Lookup Editor 4.0.1 UI become super annoying to use? First, the centering of the data in every cell is problematic if you want to compare with other rows; it should have stayed left-aligned, in my opinion, like every other spreadsheet. If you change the width of a column, it collapses as soon as you edit a cell. Overall it is much slower than earlier versions. The only remedy is to export, edit, and import again. It might be a small issue if you do not work with lookups, but if, like me, you do, it is really hair-pulling. What are your thoughts?
Hi All, I have a problem with Azure logs. I configured everything following the Azure and Splunk docs. I want to collect all the logs for the Add-on for Microsoft Office 365; I need information about Message Trace, MS Teams, and Mailbox. However, I only got logs for sourcetype o365:activity. How can I enable the other logs (Message Trace, MS Teams, Mailbox)? Thanks for your help
Hi, I need a Python script template that calls the Centreon API and sends a Splunk alert to this monitoring tool. Consider that my alert is called "test" in $SPLUNK_HOME/etc/apps/myapp/local/savedsearches.conf. Could you send me a basic Python script example which is able to do that? Thanks
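A minimal stdlib-only sketch of the Splunk side of this, assuming token authentication on the management port (8089). The `dispatch` endpoint for saved searches is part of Splunk's REST API; the Centreon endpoint path and payload below are placeholders you would replace from your Centreon API documentation:

```python
import json
import urllib.parse
import urllib.request

SPLUNK_BASE = "https://localhost:8089"  # Splunk management port (assumed host)


def dispatch_url(app: str, search_name: str) -> str:
    """Build the REST endpoint that dispatches a saved search on demand."""
    quoted = urllib.parse.quote(search_name, safe="")
    return f"{SPLUNK_BASE}/servicesNS/nobody/{app}/saved/searches/{quoted}/dispatch"


def run_saved_search(app: str, search_name: str, token: str) -> str:
    """Trigger the saved search and return the search job id (sid)."""
    req = urllib.request.Request(
        dispatch_url(app, search_name),
        data=urllib.parse.urlencode({"output_mode": "json"}).encode(),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["sid"]


def notify_centreon(base_url: str, auth_token: str, payload: dict) -> None:
    """Forward the alert to Centreon. The path below is a placeholder -
    substitute the submit endpoint for your Centreon API version."""
    req = urllib.request.Request(
        f"{base_url}/centreon/api/latest/",  # placeholder path
        data=json.dumps(payload).encode(),
        headers={"X-AUTH-TOKEN": auth_token, "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

For example, `run_saved_search("myapp", "test", token)` would dispatch the alert named "test" from the app holding your savedsearches.conf. An alternative design is to go the other way around: configure the saved search with a custom alert action whose script calls Centreon, so Splunk's scheduler drives everything.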
Hello, could anyone offer an example of (or advice on) some SPL for tracking down users running ad-hoc searches across all indexes, please? Thank you
Each log event contains more than one transaction because we are logging mini-batch log events: every 2 minutes, a bunch of transactions is logged as a single event. Below is a sample. In this case, how can I count the number of transactions, i.e. the number of Code and minCode values? If I do "timechart span=2m count", it counts each log event (which contains multiple mini-batch transactions) as 1. Please help me find the count of each transaction.

Sample log events:

2021-05-11 21:36:33,634: {"level":"INFO","message":"COMMON_FIELDS - Code:1001 | Status:New | minCode:ABC"} {"level":"INFO","message":"COMMON_FIELDS - Code:1002 | Status:New | minCode:DEF"}{"level":"INFO","message":"COMMON_FIELDS - Code:1003 | Status:Modify | minCode:XYZ"}

2021-05-11 21:38:31,524: {"level":"INFO","message":"COMMON_FIELDS - Code:1011 | Status:New | minCode:RTY"} {"level":"INFO","message":"COMMON_FIELDS - Code:1012 | Status:New | minCode:HJK"}{"level":"INFO","message":"COMMON_FIELDS - Code:1013 | Status:Modify | minCode:VFR"}{"level":"INFO","message":"COMMON_FIELDS - Code:1014 | Status:New | minCode:KLO"}

The result I expect is something like this:

Using | timechart span=2m count:
_time                  count
2021-05-11 21:26:00    3
2021-05-11 21:28:00    4

Using | timechart span=5m count:
_time                  count
2021-05-11 21:26:00    7
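One way to get per-transaction counts, as a sketch: extract every `Code:` occurrence in each event into a multivalue field, then sum the value counts per time bucket (the field name and pattern are taken from the sample above):

```
<base search>
| rex max_match=0 "Code:(?<Code>\d+)"
| eval tx_count = mvcount(Code)
| timechart span=2m sum(tx_count) AS count
```

`max_match=0` tells `rex` to capture all matches rather than just the first, which is what makes `Code` multivalued.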
I am ingesting data into Splunk Cloud using Cribl (not directly via the GCP Add-on) and using the Google Cloud TA on the search head for data enrichment. Since the data is not ingested via the add-on, I need to manually create sourcetypes that match the add-on. I was able to identify all the sourcetypes used by the Google Cloud Add-on except for GCP firewall logs. Does anyone know which Splunk sourcetype is used for GCP firewall logs, so that the GCP TA is satisfied by it?
Hi Splunk Community, I am looking to create a search that can help me extract a specific key/value pair within nested JSON data. The tricky part is that the nested JSON data is within an array of dictionaries with the same keys. I want to extract a particular key/value within a dictionary only when another key is equal to a specific value. Sample JSON below:

{
  Key1: "Value1",
  Key2: {
    subKey2_1: "sub value1 for key2",
    subKey2_2: [
      { subkey2_2_key1: "value1_sub22",   subkey2_2_key2: "value2_sub22" },
      { subkey2_2_key1: "value1_sub22_2", subkey2_2_key2: "value2_sub22_2 ---- value interested in " },
      { subkey2_2_key1: "value1_sub22_3", subkey2_2_key2: "value2_sub22_3" }
    ],
    subKey2_3: "sub value3 for key2"
  },
  Key3: "Value3",
  Key4: "Value4"
}

I am looking to extract the value for "subkey2_2_key2" when subkey2_2_key1 = "value1_sub22_2".
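A sketch of one approach, assuming the events are valid JSON so `spath` can walk them: expand the array into one result per element, re-extract the fields within each element, and filter on the sibling key:

```
<base search>
| spath path=Key2.subKey2_2{} output=entry
| mvexpand entry
| spath input=entry
| search subkey2_2_key1="value1_sub22_2"
| table subkey2_2_key2
```

The `{}` suffix in the `spath` path selects every element of the `subKey2_2` array as a multivalue field, which `mvexpand` then splits into separate results.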
How to monitor Splunk system performance?
Hi Everyone, I'm trying to use the APM Synthetics feature to set up a recurring ping to an API to health-check its availability. This API requires passing an Azure JSON Web Token (JWT) as part of the header. To obtain the JWT, which expires in about an hour, I need to invoke another API that goes to Azure and provides me with the token value. The response to this API comes in JSON format; I just need to retrieve one field from it and, after storing it in a Custom Variable, feed it to the next API as part of the header. I've been trying to use the "Setup" step to store the extracted field from the JSON response from Azure. The field name is "access_token" in the sample JSON response below. To extract this field I've used $.access_token, which is the standard way of getting a scalar value from the JSON structure below. I save this value as "accessToken", and in the second API call I set the header to "Bearer {{accessToken}}". When I invoke the API set I get the error "Test validation: failed. Extract from response body failed".
{
  "token_type": "Bearer",
  "expires_in": 3599,
  "ext_expires_in": 3599,
  "access_token": "fsdfhlasdkfhsldfhsd7459438753"
}

Below is the "Script" view of my setup:

[
  {
    "configuration": {
      "name": "Token ",
      "request_method": "POST",
      "url": "https://apis-dev-rtfint.ace.aaaclubnet.com/dev-s-generate-identitytoken/v1/get-token-client-secret",
      "headers": {
        "client_id": "218365067aca423e8301421e0643781b",
        "client_secret": "F052D7037b0C498A889fD19f22A140Af",
        "ACCEPT": "application/json"
      },
      "body": "{\n \"tenant\" : \"d5f618ff-2951-8f7e-999c2dd97ab2\",\n \"client_id\" : \"836546d7-a27c-4ad2-87b1-86d94a5\",\n \"scope\" : \"api://aceclubnet.onmicrosoft.com/mulesoft/membership/.default\",\n \"grant_type\" : \"client_credentials\",\n \"client_secret\" : \"IZn8Q~rOnfaE9zCvHd7g2yoD\"\n}"
    },
    "setup": [
      {
        "name": "Extract from response body",
        "type": "extract_json",
        "source": "{{response.body}}",
        "extractor": "$.access_token",
        "variable": "accessToken"
      }
    ],
    "validations": []
  },
  {
    "configuration": {
      "name": "Request TSO Credential",
      "request_method": "GET",
      "url": "https://apis-dev-rtfint.aceclublab.com/dev-s-retrieve-tso-credentials/v1/retrieveTSOCredentials",
      "headers": {
        "client_id": "218365067aca423e8301421e0643781b",
        "AUTHORIZATION": "Bearer {{accessToken}}",
        "client_secret": "F052D7037b0C498A889fD19f22A140Af"
      },
      "body": null
    },
    "setup": [],
    "validations": [
      {
        "name": "Assert response code equals 200",
        "type": "assert_numeric",
        "actual": "{{response.code}}",
        "expected": "200",
        "comparator": "equals"
      }
    ]
  }
]
Below is the Splunk query that I'm using:

index=standard_tanium source="Prod-CSV" script="read.ps1"
| rename field01 AS user
| rename field02 AS login-count
| rename CSV-Timestamp AS Script_run
| rex mode=sed field=user "/[&?].*//g"
| table Script_run IDBC-Hostname user login-count
| lookup read-login.csv user-id AS user OUTPUTNEW user-id
| lookup owner.csv hostname AS IDBC-Hostname OUTPUTNEW tier sownername servicename
| search user-id=*
| sort - login-count
| fields Script_run IDBC-Hostname user login-count tier sownername servicename

The query works fine, except that duplicate values come back for tier, sownername, and servicename. Output:

Script_run          IDBC-Hostname   user           login-count   tier   sownername   servicename
2023/06/19 12:25    abc12345        11133121-990   4             2      Bob          ADF-Co
                                                                 2      Bob          ADF-Co
                                                                 2      Bob          ADF-Co
                                                                 2      Bob          ADF-Co
2023/06/15 17:25    xzyz1112        33421111-990   2             1      Sam          AXF-Co
                                                                 1      Sam          AXF-Co
                                                                 1      Sam          AXF-Co
                                                                 1      Sam          AXF-Co
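Duplicates like this typically appear when the lookup table has several rows matching the same hostname, so the output fields come back multivalued. As a sketch, one remedy is to deduplicate the repeated values right after the lookup (alternatively, cap `max_matches` for the lookup in transforms.conf):

```
... | lookup owner.csv hostname AS IDBC-Hostname OUTPUTNEW tier sownername servicename
| eval tier=mvdedup(tier), sownername=mvdedup(sownername), servicename=mvdedup(servicename)
```

`mvdedup` collapses identical values within a multivalue field, which matches the output above where every repeated row carries the same tier/sownername/servicename.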
Hi Everyone, I am trying to see if there is a query I can run that will tell me which of our password requirements a user is failing to meet when trying to set their password. I believe there is something I can run that will give me this info. Thank you very much in advance for any assistance.
Hello All, I need help understanding the cache-related fields returned by the _audit index for scheduled searches:

duration_command_search_index_bucketcache_miss
duration_command_search_index_bucketcache_hit
duration_command_search_rawdata_bucketcache_hit
duration_command_search_rawdata_bucketcache_miss
invocations_command_search_index_bucketcache_error
invocations_command_search_index_bucketcache_hit
invocations_command_search_index_bucketcache_miss
invocations_command_search_rawdata_bucketcache_error
invocations_command_search_rawdata_bucketcache_hit
invocations_command_search_rawdata_bucketcache_miss

Any information would be very helpful. Thank you, Taruchit
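As a sketch of how such fields can be used (built only from the field names listed above, which relate to SmartStore bucket-cache behavior), here is an aggregate cache hit-ratio query over search-completion audit events:

```
index=_audit action=search info=completed
| stats sum(invocations_command_search_index_bucketcache_hit) AS idx_hit
        sum(invocations_command_search_index_bucketcache_miss) AS idx_miss
| eval idx_hit_ratio_pct = round(100 * idx_hit / (idx_hit + idx_miss), 1)
```

The `duration_*` counterparts measure time spent on hits and misses, so a high miss duration relative to hit duration is a sign that searches are waiting on bucket downloads from remote storage.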
Would there ever be a scenario where it's acceptable to have enabled alerts and/or reports running which are not assigned to anybody, i.e. owner = Nobody?
Hello All, I have created the following search in Splunk:

index=* namespace=*
| rex "Executing http:\/\/(?<rval>\w+.*)"
| eval rvl_status=case(rval=="se","E_Successful", rval=="vt","F_Successful")
| stats count by rvl_status

The E_Successful and F_Successful counts will always be 1 for a given date, and this works perfectly. But what I am seeking is to add another case for when no events are returned, call it No_Success, and increase its count by 1 or 2 depending on the counts of E_Successful and F_Successful: No_Success should be 1 if either E_Successful or F_Successful is 0, and 2 if both are 0. I have tried various things, but I am unable to get the desired results. Could someone assist me, please?
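One sketch that sidesteps the missing-group problem entirely: count each status as its own column (so an absent status becomes 0 instead of disappearing from the `by` clause), then derive No_Success from the zeros:

```
index=* namespace=*
| rex "Executing http:\/\/(?<rval>\w+.*)"
| stats count(eval(rval=="se")) AS E_Successful
        count(eval(rval=="vt")) AS F_Successful
| eval No_Success = if(E_Successful=0,1,0) + if(F_Successful=0,1,0)
```

One caveat: if the base search returns no events at all, `stats` emits no row, so you would additionally need something like `appendpipe [stats count | where count=0]` to force a result in the fully-empty case.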