
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I'm trying to collapse a set of data into earliest/latest by _time, where the time is contiguous. Such as:
2022-08-27 07:36:00
2022-08-27 07:37:00
2022-08-27 07:38:00
2022-08-27 07:39:00
2022-08-27 07:40:00
2022-08-27 07:44:00
2022-08-27 07:45:00
2022-08-27 07:46:00
2022-08-27 08:31:00
2022-08-27 08:32:00
2022-08-27 08:33:00
2022-08-27 08:34:00
2022-08-27 08:35:00
Desired output:
earliest:             latest:
2022-08-27 07:36:00   2022-08-27 07:40:00
2022-08-27 07:44:00   2022-08-27 07:46:00
2022-08-27 08:31:00   2022-08-27 08:35:00
Thoughts?
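One possible approach (an untested sketch, assuming one event per minute so a gap larger than 60 seconds starts a new run): sort by time, flag each gap, turn the running count of flags into a group id, then take earliest/latest per group.
| sort 0 _time
| streamstats current=f last(_time) as prev_time
| eval new_group=if(isnull(prev_time) OR _time - prev_time > 60, 1, 0)
| streamstats sum(new_group) as group_id
| stats earliest(_time) as earliest latest(_time) as latest by group_id
| fieldformat earliest=strftime(earliest, "%Y-%m-%d %H:%M:%S")
| fieldformat latest=strftime(latest, "%Y-%m-%d %H:%M:%S")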
Hi, how can I combine two fields (2.1 and 2.2) into one field (Main calculation)? I have a table, and I would like to convert it into something like this: START_TIME is the value of the 2.1 row, FINISH_TIME is the value of the 2.2 row, and Completion_time is the sum of the two fields (0.35 + 60.53).
SPL query:
| eval finish_time_epoch = strftime(strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S"),"%Y-%m-%d %H:%M:%S")
| eval start_time_epoch = strftime(strptime(START_TIME, "%Y-%m-%d %H:%M:%S"),"%Y-%m-%d %H:%M:%S")
| eval duration_s = strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S") - strptime(START_TIME, "%Y-%m-%d %H:%M:%S")
| eval duration_min = round(duration_s / 60, 2)
| rename duration_min AS Completion_time
| eval Process=if(Process="013","2.1 Main calculation",Process)
| eval Process=if(Process="014","2.2 Main calculation",Process)
| table Process, START_TIME, FINISH_TIME, Completion_time
| sort -START_TIME, -FINISH_TIME
| sort +Process
| transpose 0 header_field=Process column_name=Process
| dedup Process
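If the goal is one row carrying the 2.1 start, the 2.2 finish, and the summed duration, a simpler sketch (untested; it assumes Completion_time has already been computed per row as above, that 2.1 runs before 2.2, and that the timestamps are ISO-formatted strings so min/max sort correctly) would aggregate instead of transposing:
| eval Process=if(Process="013" OR Process="014", "Main calculation", Process)
| stats min(START_TIME) as START_TIME max(FINISH_TIME) as FINISH_TIME sum(Completion_time) as Completion_time by Process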
Getting the following error, "could not load lookup = lookup-severity_for_fireeye", after running the search. I tried checking the lookup tables and definitions, and also the automatic lookup; the field name it is using in the search and the one in the lookup table are different. What can I do to rectify this error?
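As a first check, it may help to verify that the lookup loads on its own and that the search references the lookup definition name rather than the file name; a sketch, assuming lookup-severity_for_fireeye is the definition name:
| inputlookup lookup-severity_for_fireeye
If that also fails, the definition or its permissions/sharing are the likely culprit; if it succeeds, aligning the field name in the automatic lookup with the column name in the table is the next step.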
I have 60*24 = 1440 records in my saved search, which means every minute has 1 record (|bin _time span=1m), with values like these in the _time column:
8/26/2022 11:30:40am
8/26/2022 11:30:41am
I connected my Splunk data to Tableau, but all the datetimes in the 1440 records changed to 8/26/2022 12:00:00am when I use extract mode in Tableau. Has anyone faced the same issue before?
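One workaround sketch (untested, assuming the saved search can be modified): emit the timestamp as a plain string alongside _time, so the extract cannot re-cast it to a day boundary.
| bin _time span=1m
| eval time_str=strftime(_time, "%Y-%m-%d %H:%M:%S")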
New to Splunk Cloud and EC2 universal forwarder installs. I am reading that the Splunk Cloud universal forwarder on Linux needs a credential file? Where is the credential file located? Or is it just the same as on-premises Splunk universal forwarder installs? The more I research, the more confused I get.
Hello. We are trying to integrate Cortex XSOAR with Splunk Cloud following the manufacturer's documentation, but it says that when integrating with Splunk Cloud it is necessary to request API access from support, and we also need the IP, as shown in the attached screenshots of the Configuration screen. Is it possible to help us with this? Support passed this link to follow: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Config/ACSIntro , but it is not working. Could someone help, please? Thank you.
Does Splunk ever plan on updating the Java API to mirror the Python splunk-sdk? The Java library is way behind the Python library when it comes to custom search. As far as I can tell, you cannot even create custom searches with Java for Splunk.
1. Splunk could more easily integrate with a variety of Apache tools.
2. You would get a performance boost for applications that really should not be built in Python.
3. Splunk would open itself up to a larger segment of the developer community.
We are trying to use our internal S3-compliant object store with SmartStore, but our access key and secret key expire and rotate every day. Does anyone know how to handle the key rotation? I know these keys are set in the indexes.conf file, so we should be able to write a script to update it, but does that require a Splunk restart? Any insights will be greatly appreciated.
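For reference, these are the indexes.conf settings such a rotation script would rewrite (the volume name, bucket path, and endpoint below are placeholders):
[volume:smartstore]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.internal.example.com
remote.s3.access_key = <rotated access key>
remote.s3.secret_key = <rotated secret key>
Whether an edited file is picked up without a restart is version-dependent, so it is worth testing in a non-production environment first.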
Hi, I have a field with a timestamp value in the format "2017-09-21T20:00:00". I need to convert it to date and time with the time zone, for example, Thu Jul 18 09:30:00 PDT 2022. Please help, thanks.
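A sketch with eval, assuming the field is named mytime (a placeholder); %Z renders the time zone configured for the user running the search:
| eval epoch=strptime(mytime, "%Y-%m-%dT%H:%M:%S")
| eval formatted=strftime(epoch, "%a %b %d %H:%M:%S %Z %Y")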
I have a simple .csv I ingest daily via a monitored file. My .csv has some fields in it that show dates/times, but they do NOT represent the time I want the event indexed at. I want _time to show the time the .csv file was ingested, and for Splunk to ignore the other fields in the .csv which have dates/times present. I have created a new sourcetype by cloning .csv and set the timestamp to use "current time"; however, Splunk will still prefer to use random dates/times found in field values, and only uses "current time" when no fields contain any other time information. I can "fix" this by manually adding a time field in the .csv before ingesting, but I am trying to automate this process as much as possible. Is there a way I can force Splunk to ignore all date/time values found in a .csv and use ingest time for the _time value? Thank you in advance!
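A props.conf sketch of that "current time" setting (the sourcetype name is a placeholder); one common gotcha is that for INDEXED_EXTRACTIONS sourcetypes these settings typically need to be deployed to the forwarder that parses the file, not just the indexers:
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT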
Hello! We have some logs coming across which are in JSON and thus 'just work'. The problem is, inside the log field are the events we need to extract. There are about 200 apps that will be logging this way, and each app will have different fields and values, so doing a standard field extraction won't work, or would produce 1000s of potential kvps. The format, however, is always the same.

{ "log": "[2022-08-25 18:54:40.031] INFO JsonLogger [[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5] [event: 25670349-e6b5-4996-9cb6-c4c9657cd9ba]: {\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"message\" : \"MESSAGE_HERE\",\n \"tracePoint\" : \"END\",\n \"priority\" : \"INFO\",\n \"elapsed\" : 0,\n \"locationInfo\" : {\n \"lineInFile\" : \"95\",\n \"component\" : \"json-logger:logger\",\n \"fileName\" : \"buildLoggingAndResponse.xml\",\n \"rootContainer\" : \"responseStatus_success\"\n },\n \"timestamp\" : \"2022-08-25T18:54:40.030Z\",\n \"content\" : {\n \"ResponseStatus\" : {\n \"type\" : \"SUCCESS\",\n \"title\" : \"Encryption successful.\",\n \"status\" : \"200\",\n \"detail\" : { },\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"apiMethodName\" : null,\n \"apiURL\" : \"https://app.com/prd-ops-mulesoft/encrypt/v1.0\",\n \"apiVersion\" : \"v1\",\n \"x-ConsumerRequestSentTimeStamp\" : \"\",\n \"apiRequestReceivedTimeStamp\" : \"2022-08-25T18:54:39.856Z\",\n \"apiResponseSentTimeStamp\" : \"2022-08-25T18:54:40.031Z\",\n \"userId\" : \"GID01350\",\n \"orchestrations\" : [ ]\n }\n },\n \"applicationName\" : \"ops-mulesoft\",\n \"applicationVersion\" : \"v1\",\n \"environment\" : \"PRD\",\n \"threadName\" : \"[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5\"\n}\n", "stream": "stdout", "time": "2022-08-25T18:54:40.086450071Z", "kubernetes": { "pod_name": "prd-ops-mulesoft-94c49bdff-pcb5n", "namespace_name": "4cfa0f08-92b0-467b-9ca4-9e49083fd922", "pod_id": "e9046b5e-0d70-11ed-9db5-0050569b19f6", "labels": { "am-org-id": "01a4664d-9e16-454b-a14c-59548ef896b5", "app": "prd-ops-mulesoft", "environment": "4cfa0f08-92b0-467b-9ca4-9e49083fd922", "master-org-id": "01a4664d-9e16-454b-a14c-59548ef896b5", "organization": "01a4664d-9e16-454b-a14c-59548ef896b5", "pod-template-hash": "94c49bdff", "rtf.mulesoft.com/generation": "aab6b8074cf73151b1515de0e468478e", "rtf.mulesoft.com/id": "18d3e5d6-ce59-4837-9f3b-8aad3ccffcef", "type": "MuleApplication" }, "host": "1.1.1.1", "container_name": "app", "docker_id": "cf07f321aec551b200fb3f31f6f1c67b2678ff6f6a335d4ca41ec2565770513c", "container_hash": "rtf-runtime-registry.kprod.msap.io/mulesoft/poseidon-runtime-4.3.0@sha256:6cfeb965e0ff7671778bc53a54a05d8180d4522f0b1ef7bb25e674686b8c3b75", "container_image": "rtf-runtime-registry.kprod.msap.io/mulesoft/poseidon-runtime-4.3.0:20211222-2" } }

The JSON works fine, but the events we ALSO want extracted are in here:

{\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"message\" : \"MESSAGE_HERE\",\n \"tracePoint\" : \"END\",\n \"priority\" : \"INFO\",\n \"elapsed\" : 0,\n \"locationInfo\" : {\n \"lineInFile\" : \"95\",\n \"component\" : \"json-logger:logger\",\n \"fileName\" : \"buildLoggingAndResponse.xml\",\n \"rootContainer\" : \"responseStatus_success\"\n },\n \"timestamp\" : \"2022-08-25T18:54:40.030Z\",\n \"content\" : {\n \"ResponseStatus\" : {\n \"type\" : \"SUCCESS\",\n \"title\" : \"Encryption successful.\",\n \"status\" : \"200\",\n \"detail\" : { },\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"apiMethodName\" : null,\n \"apiURL\" : \"https://app.com/prd-ops-mulesoft/encrypt/v1.0\",\n \"apiVersion\" : \"v1\",\n \"x-ConsumerRequestSentTimeStamp\" : \"\",\n \"apiRequestReceivedTimeStamp\" : \"2022-08-25T18:54:39.856Z\",\n \"apiResponseSentTimeStamp\" : \"2022-08-25T18:54:40.031Z\",\n \"userId\" : \"GID01350\",\n \"orchestrations\" : [ ]\n }\n },\n \"applicationName\" : \"ops-mulesoft\",\n \"applicationVersion\" : \"v1\",\n \"environment\" : \"PRD\",\n \"threadName\" : \"[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5\"\n}

SPATH would work, but the JSON is fronted by this:

"[2022-08-25 18:54:40.031] INFO JsonLogger [[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5] [event: 25670349-e6b5-4996-9cb6-c4c9657cd9ba]:

So it doesn't treat it as JSON. This is one example; other events don't have the correlationId, etc. We need a method that will take the raw data, parse it as JSON, AND then dynamically extract the events in the log field as their KV pairs (e.g. \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\" == $1::$2). Can this be done in transforms using regex? Is this even possible, or do we ultimately need to create extractions based on every possible field? Appreciate the guidance here! Thanks!!
We are currently tasked with having Splunk monitor an AKS cluster in Azure, and are comparing two solutions:
- Installing Splunk Connect for Kubernetes in AKS, as per this thread: We are thinking of moving to Azure Kontainer Servi... - Splunk Community
- Another pattern that was done before is to enable Azure Monitor, which in turn ships logs to Event Hub, eventually consumed by Splunk via the Splunk Add-on for Microsoft Cloud Services.
How do the two solutions compare, and which is the preferred solution?
We have several devices that perform endpoint and network device scanning. As intended, they are scanning prohibited ports to verify they are not open; however, the ESCU correlation searches, specifically the "Prohibited Network Traffic Allowed" rule, are detecting thousands of these events each day. How can I prevent notable events from being created in Enterprise Security when the source is one of the scanning devices? Thank you.
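One common pattern (a sketch; the lookup name and field are assumptions) is to keep the scanner addresses in a lookup and exclude them in a tuned copy of the correlation search:
| search NOT [| inputlookup authorized_scanners.csv | fields src]
ES notable event suppressions are the alternative if you'd rather leave the correlation search itself untouched.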
I'm trying to create a playbook that uses the Windows Remote Management app to take a file saved locally on a server and move it to a location on a network share. I've tried using different command and PowerShell options and the WRM app's built-in action 'copy-item', and none of them work. I can run these commands and scripts locally on the server, logged in as the user that would be performing these actions through SOAR, and everything works fine. I can also have SOAR move the file from a local folder to another local folder and everything works fine. It's only when I ask SOAR to move it to a network share that it will not work. Examples of what I'm doing:

Move-Item -Path C:\folder\file.txt -Destination \\servername\sharename

This script will work fine locally, but will not through SOAR.

Move-Item -Path C:\folder\file.txt -Destination C:\differentfolder\file.txt

This script will work fine both locally and through SOAR. I've tried mapping the drive so I can use M:\file.txt and it still fails. I've asked SOAR to run the commands directly and have also tried letting SOAR run a script that uses these commands, and it will not work. It doesn't seem to be a permission issue since I'm able to do all of this locally. I'm lost as to what else I can try or what else to look for as possible issues. Thanks for any help.
I recently have taken my splunk core use
I have a message thread; these messages are coming into Splunk. The chain consists of ten different messages: five messages from one system, and five messages from another (backup) system. Messages from the primary system use the same SrcMsgId value, and messages from the backup system are combined with a common SrcMsgId. Messages from the standby system also have a Mainsys_srcMsgId value; this value is identical to the main system's SrcMsgId value. The message chain from the backup system enters Splunk immediately after the messages from the main system. Tell me, how can I display a chain of all ten messages? Perhaps first the messages from the first (main) system, then from the second (backup), with the display of the time of arrival at the server. With time, I understand, I will include _time in the request. I have become a little familiar with the syntax of queries, but I still have a lot of difficulties creating them. Please help me with an example of the correct request. Thank you in advance!
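A sketch of one way to build the chain key (assuming the fields are already extracted as SrcMsgId and Mainsys_srcMsgId): backup messages carry the primary's id in Mainsys_srcMsgId, so coalescing gives all ten messages one shared value, and sorting by that key and _time lists the primary messages first since they arrive first:
| eval chain_id=coalesce(Mainsys_srcMsgId, SrcMsgId)
| sort 0 chain_id _time
| table _time chain_id SrcMsgId Mainsys_srcMsgId _raw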
Good morning. We have been tracking a recent reduction in our log ingest rate. After a myriad of searching, it appears that the reduction in XML Windows event logs occurred the same week that Windows patching occurred in July of 2022. We are down by approximately 10%, maybe a little less than that. We have noted that the XML wineventlog index appears to be the only index affected. I'm concerned because this could indicate:
1. Patching broke logging on the Windows systems and we aren't getting everything we used to or should
2. Patching made logging more efficient and we are getting the same or better/more data with less overall size
3. Something else could be broken within Splunk itself and this is the only indication
We opened an on-demand case and they found nothing wrong. We opened a support case and they told us what we could see for ourselves in the cloud monitoring console. We've continued to search and investigate, and our working theory is that patching affected the logging. We now need to know if it's a good thing (number 2) or a bad thing (number 1). My question is: has anyone else noticed a drop in xmlwineventlog volume over the last few months? Thanks in advance.
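To help decide between 1 and 2, a per-host breakdown may show whether specific Windows hosts went quiet after the patch window (a sketch; the index name is an assumption, adjust to your environment):
| tstats count where index=wineventlog earliest=-6mon@mon by _time span=1d, host
| xyseries _time host count
Hosts whose event counts drop to zero point at broken logging; a uniform shrink across all hosts points more toward leaner events.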
Is there an API available, or some other SPL-searchable way, to find the index cluster replication factor? I would like to create some dashboards and searches for monitoring our indexers and would like to be able to display the replication factor. I have been using "/services/search/distributed/peers" for some information, but is there an API available that will tell me what the replication factor is? This is going to be "run anywhere", as it will be deployed to at least 5 separate environments, so hard-coding won't suffice.
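One run-anywhere sketch is the cluster config REST endpoint, which reports replication_factor and search_factor; it has to be asked of the cluster manager, so the splunk_server value below is a placeholder you'd resolve per environment:
| rest /services/cluster/config splunk_server=<cluster_manager>
| fields replication_factor search_factor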
Hi, Splunkers, I have a multiselect dropdown field in my Splunk dashboard. I want to select 2 options from it. I noticed it's previewed as "value1" "value2", and no results are returned. I assume it works as value1 AND value2, but I expected it to work as value1 OR value2. How do I configure it? Kevin
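If this is Simple XML, the multiselect's delimiter controls how the selected values are joined; a sketch (the token and field names are placeholders):
<input type="multiselect" token="sel_values">
  <label>Values</label>
  <valuePrefix>myfield="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <prefix>(</prefix>
  <suffix>)</suffix>
</input>
With that, $sel_values$ expands to (myfield="value1" OR myfield="value2") in the search.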
Hi, I am running the query below; however, I am getting an error saying relation "analytics_hca_change_indicator_event doesn't exist" when the table doesn't exist in any one of the schemas.
| koogledimen service=TenantPPASQuery action=AdhocQuery targetGroup="keng03-dev01-ins08-wfm19-dbs" app="Unknown_App/ppas_dheeraj_r9int" schema="_ALL_" query="select date(createdtm), count(*) from analytics_hca_change_indicator_event group by createdtm "
| eval envstatus=if(like(scope, "%dev01%"), 1, 0)
| eval wfmstatus=if(like(scope, "%wfm19%"), 1, 0)
| where envstatus=1 and wfmstatus=1
| eval wfm_schemaname = mvindex(split(scope, "-"), -1).schemaname
| eval wfm_schemaname = mvindex(split(scope, "-"), -1)."_".schema_name
| chart sum(count) by date,wfm_schemaname
How can I handle this scenario, please?