All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I need to display 2 curves in my line chart from two different indexes, so I am doing this:

index="disk" sourcetype="Perfmon:disk"
| bin span=10m _time
| eval time=strftime(_time, "%H:%M:%S")
| stats avg(Value) as Disque by time
| eval Disque=round(Disque, 2)
| append
    [ search index="mem" sourcetype="Perfmon:mem"
    | bin span=10m _time
    | eval time=strftime(_time, "%H:%M:%S")
    | stats avg(Value) as Mémoire by time
    | eval Mémoire=round(Mémoire, 2) ]

The problem I have is that on the x-axis my curves are not aligned on the same time slots. What is wrong, please? Thanks
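One way to keep both series on a shared axis is to chart by the numeric _time bucket rather than a formatted string, and to merge the appended rows so each bucket carries both values. A minimal sketch, assuming both searches run over the same time range:

index="disk" sourcetype="Perfmon:disk"
| bin span=10m _time
| stats avg(Value) as Disque by _time
| append
    [ search index="mem" sourcetype="Perfmon:mem"
    | bin span=10m _time
    | stats avg(Value) as Mémoire by _time ]
| stats first(Disque) as Disque first(Mémoire) as Mémoire by _time
| eval Disque=round(Disque, 2), Mémoire=round(Mémoire, 2)

Because _time stays numeric, the line chart places both curves on the same timeline instead of sorting two independent sets of label strings.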
How to download and install a trial version of Splunk SOAR and MITRE Framework?
Hello, good morning. Currently I am sending the following data, but when it is ingested into Splunk it is not recognized as JSON:

Feb 5 18:50:30 10.0.30.81 {"LogTimestamp": "Tue Feb 6 00:50:31 2024","Customer": "xxxxxx","SessionID": "xxxxxx","SessionType": "TTN_ASSISTANT_BROKER_STATS","SessionStatus": "TT_STATUS_AUTHENTICATED","Version": "","Platform": "","XXX": "XX-X-9888","Connector": "XXXXXXXX","ConnectorGroup": "XXX XXX XXXXXX GROUP","PrivateIP": "","PublicIP": "18.24.9.8","Latitude": 0.000000,"Longitude": 0.000000,"CountryCode": "","TimestampAuthentication": "2024-01-28T09:26:31.592Z","TimestampUnAuthentication": "","CPUUtilization": 0,"MemUtilization": 0,"ServiceCount": 0,"InterfaceDefRoute": "","DefRouteGW": "","PrimaryDNSResolver": "","HostStartTime": "0","ConnectorStartTime": "0","NumOfInterfaces": 0,"BytesRxInterface": 0,"PacketsRxInterface": 0,"ErrorsRxInterface": 0,"DiscardsRxInterface": 0,"BytesTxInterface": 0,"PacketsTxInterface": 0,"ErrorsTxInterface": 0,"DiscardsTxInterface": 0,"TotalBytesRx": 19162399,"TotalBytesTx": 16432931,"MicroTenantID": "0"}

Can you help me? Can the syslog prefix on this line be removed on the forwarder via the props files? Regards,
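If the data passes through a heavy forwarder or indexer, one common approach is a SEDCMD in props.conf that strips the syslog header at parse time so the event starts at the opening brace. A sketch, where my_json_feed stands in for your actual sourcetype:

# props.conf on the heavy forwarder / indexer
[my_json_feed]
# discard everything before the first "{" (the "Feb 5 18:50:30 10.0.30.81 " prefix)
SEDCMD-strip_syslog_prefix = s/^[^{]+//
# then let search-time extraction parse the remaining JSON
KV_MODE = json

SEDCMD runs at index time, so it must live on the first full Splunk instance the data traverses; a universal forwarder will not apply it.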
I am working with event data in Splunk where each event contains a command with multiple arguments. I'm extracting these arguments and their associated values using regex, resulting in multi-value fields within Splunk. However, I'm encountering a challenge where some arguments do not have an associated value, and for these cases I would like to set their values to `true`. Here's the SPL I'm using for extraction:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"

What I need is to refine this SPL so that, after extraction, any argument without a value is automatically assigned a value of `true`. After setting the default values, I would then like to use `mvexpand` to separate each argument-value pair into its own event. Could you provide guidance on how to adjust my regex or SPL command to accomplish this within Splunk?
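One pattern that avoids misaligned flag/value multi-value fields is to capture each flag together with its optional value as a single token, `mvexpand` that token, and only then split the pair. A sketch against the same Aptlauncher_cmd field (the character classes are simplified and may need to match yours):

| rex max_match=0 field=Aptlauncher_cmd "\s(?<pair>--?[\w\-.@$#]+(?:\s+(?!--?)[^\s]+)?)"
| mvexpand pair
| rex field=pair "^(?<flag>--?[\w\-.@$#]+)(?:\s+(?<value>.+))?$"
| eval value=coalesce(value, "true")

Because each expanded event holds exactly one pair, a flag with no value simply yields a null `value`, and coalesce fills in `true` without any index bookkeeping.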
Will this add-on integrate with devices managed in Aruba Central as well?
I have a lookup file with 2 columns, Service and Entity, and 500+ rows. Service has 34 unique values and Entity has 164. I have a dashboard where I want to use values from this lookup as input to the search criteria. With the following logic I get the dropdown values for "Service" without any issues, but not for "Entity", even though it is the same lookup file and the same logic. Any ideas? Snippet:

<input type="dropdown" token="Service" searchWhenChanged="true">
  <label>Service</label>
  <search>
    <query>| inputlookup metadata.csv | dedup service | stats dc(service) by service</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>| inputlookup metadata.csv | dedup entity | stats dc(entity) by entity</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
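Lookup field names are case-sensitive, so if the CSV header is literally Entity, a query over entity returns nothing. A minimal sketch of the Entity input under that assumption (fieldForLabel/fieldForValue tell the dropdown which column to read):

<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>| inputlookup metadata.csv | stats count by Entity | fields Entity</query>
  </search>
  <fieldForLabel>Entity</fieldForLabel>
  <fieldForValue>Entity</fieldForValue>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>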
We have a Splunk query that pulls down a list of values daily. We are looking to see if we can use Splunk to find a field value that is new today but was not present yesterday, and show it in a stats table. How can this be accomplished? The idea is:

Yesterday - the Splunk DB Connect query pulls back 5 log lines, all containing the field "name". name values: Bob, Kat, Abe, Doug, Sam
Today - the Splunk DB Connect query pulls back 6 log lines, all containing the field "name". name values: Bob, Kat, Abe, Doug, Sam, Jim (new value found)

So we would like a stats table or alert that lets us know "Jim" is a new value for name that did not exist yesterday.
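One common pattern is to search both days at once, tag each event by day, and keep only the names seen exclusively today. A sketch, assuming the events carry a name field and your usual index/sourcetype terms go up front:

index=your_index sourcetype=your_sourcetype earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as days by name
| where mvcount(days)=1 AND days="today"

An alert saved on this search with the trigger condition "number of results > 0" would fire whenever a brand-new name appears.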
Hi, I am trying to divide the logs into different events based on the scenario below. I currently have one single event:

Issuer : hjlhjk
a: xyz
PrivateKey : abc
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

Now I want it as two events:

event1:
Issuer : hjlhjk
a: xyz
PrivateKey : abc

event2:
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

How can I get this? I tried the line breaking below, which is not working:

[sourcetype]
LINE_BREAKER = ([\r\n]+)(PrivateKey)

[sourcetype]
BREAK_ONLY_BEFORE = Issuer
SHOULD_LINEMERGE = false
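Two details work against these attempts: LINE_BREAKER discards only the text captured by its first group (so breaking on PrivateKey keeps the key line in the previous event), and BREAK_ONLY_BEFORE is ignored once SHOULD_LINEMERGE = false. A sketch that breaks a new event before each Issuer line, assuming every record really starts with "Issuer":

# props.conf on the indexer / heavy forwarder
[sourcetype]
SHOULD_LINEMERGE = false
# the newlines in the capture group are consumed; "Issuer" starts the next event
LINE_BREAKER = ([\r\n]+)Issuer\s*: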
index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats avg(responseTime), max(responseTime) by client_id, servicePath, respStatus

The above query gives me one output row per client_id/servicePath/respStatus combination. I want the respStatus column split into 3 columns, so the table looks like this:

clientID | Service Path | Success Count | Error Count | Exception Count | Avg Resp Time | Max Resp Time
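One way to pivot those statuses into columns is stats with eval-based counts, grouped only by client and path. A sketch built on your own case() expression:

index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats count(eval(respStatus="SUCCESS")) as "Success Count"
        count(eval(respStatus="ERRORS")) as "Error Count"
        count(eval(respStatus="EXCEPTIONS")) as "Exception Count"
        avg(responseTime) as "Avg Resp Time"
        max(responseTime) as "Max Resp Time"
  by client_id, servicePath

Note that avg/max here run across all statuses for each client/path pair; keep respStatus in the by-clause instead if you need them per status.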
I've been working to recreate a query in Splunk from Microsoft Defender for Endpoint that shows what files users have copied to USB drives. The query works like this:

Step 1: Get all USB mount events.
Step 2: Get all file creation events on drives that are not C.
Step 3: Join the above two data sources by device ID.
Step 4: Match drive letters and make sure the USB mount time is earlier than the file create time.

Here's Microsoft's query: Microsoft-365-Defender-Hunting-Queries/Exfiltration/Files copied to USB drives.md at master · microsoft/Microsoft-365-Defender-Hunting-Queries · GitHub

In Splunk I get to step three and then I'm not able to filter values based on that. Below is my query so far; any suggestions would be helpful.

index=atp category="AdvancedHunting-DeviceFileEvents" properties.InitiatingProcessAccountName!="system" properties.ActionType="FileCreated" properties.FolderPath!="C:\\*" properties.FolderPath!="\\*"
| fields properties.ReportId, properties.DeviceId, properties.InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn, properties.FileName, properties.FolderPath, properties.SHA256, properties.Timestamp, properties.SensitivityLabel, properties.IsAzureInfoProtectionApplied
| rename properties.ReportId as ReportId, properties.DeviceId as DeviceId, properties.InitiatingProcessAccountDomain as InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName as InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn as InitiatingProcessAccountUpn, properties.FileName as FileName, properties.FolderPath as FolderPath, properties.SHA256 as SHA256, properties.Timestamp as Timestamp, properties.SensitivityLabel as SensitivityLabel, properties.IsAzureInfoProtectionApplied as IsAzureInfoProtectionApplied
| eval Timestamp_epoch = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| sort DeviceId, Timestamp desc
| join type=inner left=L right=R where L.DeviceId = R.DeviceId
    [ search index=atp category="AdvancedHunting-DeviceEvents" properties.ActionType="UsbDriveMounted"
    | spath input=properties.AdditionalFields
    | fields properties.DeviceId, properties.DeviceName, DriveLetter, properties.Timestamp, ProductName, SerialNumber, Manufacturer
    | sort properties.DeviceId, properties.Timestamp desc
    | rename properties.DeviceId as DeviceId, properties.DeviceName as DeviceName, properties.Timestamp as MountTime
    | eval MountTime_epoch = strptime(MountTime, "%Y-%m-%dT%H:%M:%S.%6N%Z") ]
| table L.FolderPath, R.DriveLetter, R.MountTime, R.MountTime_epoch, L.Timestamp, L.Timestamp_epoch
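For step 4, one approach is to derive the drive letter from the joined folder path and compare the epoch fields with where. A sketch to append to the query above (single quotes are needed in eval/where because the joined field names contain dots):

| eval file_drive=upper(substr('L.FolderPath', 1, 1))
| where file_drive=upper('R.DriveLetter') AND 'R.MountTime_epoch' < 'L.Timestamp_epoch'

Whether R.DriveLetter arrives as "E" or "E:" depends on the raw event, so the substr/upper normalization may need adjusting.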
Hello, where does Splunk get the data from CrowdStrike to build the drilldown dashboards under Detections and Events called "CrowdStrike Detections Allowed/Blocked Breakdown" and "CrowdStrike Events Allowed/Blocked Breakdown"? My confusion is that in the CrowdStrike Falcon console I don't see the terms "Blocked/Allowed" being used for detections or events, so I need to know how Splunk correlates those drilldown dashboard sections to CrowdStrike. What data from CrowdStrike does Splunk use to create those Blocked/Allowed sections?
Hey everyone! We just started using Splunk ES and have it up and running fairly well, and I have a couple of questions I'm hoping to get some guidance on, or maybe a pointer in the right direction. I would like to set up the ability for analysts to run local scripts as adaptive response actions that use dynamic user input as variables to query external APIs. In another scenario, I was hoping we could use specific tokens/fields as the dynamic variable for these scripts and just give the analyst the output of the adaptive response when it runs. Are any of these scenarios possible with ES? We have tried to find a way to do this but so far have not come up with a successful implementation. Is there any documentation on implementing something like this? Any help would be very much appreciated!
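Custom adaptive response actions are packaged as custom alert actions, and they can be invoked ad hoc from a search with the sendalert command, passing fields as parameters. A sketch, where my_api_lookup is a hypothetical action name and param.target a hypothetical parameter your script would read from the alert payload:

index=notable src_ip=10.0.0.5
| sendalert my_api_lookup param.target="10.0.0.5"

The action's script receives the parameters plus the search results in the JSON payload on stdin. For analyst-driven use, an action registered with the Common Action Model (the param._cam block in alert_actions.conf) also appears in Incident Review under "Run Adaptive Response Actions".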
Hi, I have two Splunk searches, search-1 and search-2. I have to create a Splunk alert for search-2 based on search-1: if the search-1 count is greater than 0, then trigger the search-2 alert. Regards, vch
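One way to gate an alert like this is to wrap search-1 in a subsearch that returns its event count, so search-2 only yields results (and therefore only alerts) when search-1 found something. A sketch with placeholder search terms:

<your search-2 SPL>
| where [ search <your search-1 SPL> | stats count | return $count ] > 0

return $count substitutes the bare number into the where clause, so the expression becomes, say, "where 5 > 0" and all of search-2's results pass through; when search-1 finds nothing it becomes "where 0 > 0" and the alert stays quiet.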
Hello, how can I click a button or a link to run a search and download a CSV file in Dashboard Studio? At this time, I have to click the magnifying glass to open the search, then click "Export" to download the CSV file. I don't have access to the REST API or Splunk developer tooling. Please suggest. Thank you for your help.
Good afternoon. I have this Splunk architecture:

1 search head
2 indexers in a cluster
1 master node / license server
1 monitoring console / deployment server
2 heavy forwarders
SF=2, RF=2

I added a new indexer to the cluster and after that tried to change the RF and SF, both to 3. But when I change the values from Splunk Web on the master node and restart the instance, the platform shows me the following message:

(screenshot of the error message)

Then I rolled back, returned to SF=2 and RF=2, and everything is normal. But the bucket status shows I need to change the SF and RF, and I need to know if this will fix the issues with the indexes. Regards
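For reference, the replication and search factors are set on the cluster manager and the peers then replicate to meet them. A sketch of the relevant stanza (mode is "master" on older versions, "manager" on newer ones):

# server.conf on the cluster manager / master node
[clustering]
mode = master
replication_factor = 3
search_factor = 3

The factors cannot exceed the number of peers, so with exactly three indexers RF=3/SF=3 is the most the cluster can satisfy, and a long fixup phase is expected while every bucket gains a third copy.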
Steps to reproduce:

1. Install with this docker-compose file:

version: '3.7'
services:
  splunk:
    image: splunk/splunk:latest
    container_name: splunk
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=Password1
    volumes:
      - splunk_data_var:/opt/splunk/var
      - splunk_data_etc:/opt/splunk/etc
    restart: unless-stopped
volumes:
  splunk_data_var:
  splunk_data_etc:

2. Change the admin password from the web UI.

3. Restart the Splunk docker instance.
Hello, the Splunk Connect for Syslog documentation for Thycotic (the Product section) shows information that is related to Tenable. See https://splunk.github.io/splunk-connect-for-syslog/1.96.4/sources/Thycotic/. Please review and update the section. Best, Pramod
Hi Team, we need to upgrade the OS to RHEL 8 for our AppD controller. Currently the OS is at Red Hat Enterprise Linux Server release 7.9 (Maipo). Please let us know whether we can proceed with the upgrade to RHEL 8 on this AppD version. Thanks and regards, Anand
Hello, we are at the beginning of a migration to SmartStore. We are reading/following "Migrate existing data on an indexer cluster to SmartStore" in the Splunk documentation. One thing worries us: bullet 7 in "Run the migration on the indexer cluster" says "Stop all the peer nodes." That is not so nice on a production cluster. Is this really necessary? Isn't it possible to do this part with a rolling deployment from the MC, so that not all peers go down at the same time (and we avoid losing data)? Thanks in advance for the answer. Jari
Hi, I am trying to understand the best and most cost-effective approach to ingest logs from Azure AKS into Splunk Enterprise with Enterprise Security. The logs we have to collect are mainly for security purposes. Here are the options I have found:

1. Use the "Splunk OpenTelemetry Collector for Kubernetes": https://docs.splunk.com/Documentation/SVA/current/Architectures/OTelKubernetes
2. Use cloud facilities to export the logs to Storage Accounts.
3. Use cloud facilities to export the logs to Event Hubs.
4. Use cloud facilities to send syslog to a Log Analytics workspace: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-syslog

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/containers/monitor-kubernetes
https://learn.microsoft.com/en-us/azure/aks/monitor-aks
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal
https://learn.microsoft.com/en-us/azure/architecture/aws-professional/eks-to-aks/monitoring
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview

Is there a way to use cloud facilities to stream the logs directly to Splunk so that we can avoid deploying the OTel collector? Otherwise, if we must save the logs first to a workspace/Storage Account/Event Hub and pull them into Splunk via API calls with the "Splunk Add-on for Microsoft Cloud Services" or the "Microsoft Azure Add-on for Splunk", which is the best and most cost-effective approach? Thanks a lot, Edoardo
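For context on option 1, the collector is usually deployed with the project's Helm chart pointed at a HEC endpoint. A minimal sketch with placeholder endpoint, token, index, and cluster names (value keys as I recall them from recent chart versions; verify against the chart's own documentation):

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install aks-logs splunk-otel-collector-chart/splunk-otel-collector \
  --set clusterName=my-aks-cluster \
  --set splunkPlatform.endpoint=https://splunk.example.com:8088/services/collector \
  --set splunkPlatform.token=YOUR_HEC_TOKEN \
  --set splunkPlatform.index=aks_logs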