All Topics

Hello Team, hope you are doing well. How does the retention configuration for 2 years (1 year searchable and 1 year archived) look on a Linux instance, and how does it work? Does each year have its own configuration? What are the paths and files where those configurations are stored on a Linux instance (CLI)? What link can I use to learn more about retention? Thank you in advance.
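For context, retention is configured per index in indexes.conf (on a Linux instance typically under $SPLUNK_HOME/etc/system/local/ or an app's local directory), not per year. A minimal sketch, with the index name and archive path as assumptions:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Searchable retention: roll buckets to frozen after 1 year (365 days in seconds)
frozenTimePeriodInSecs = 31536000
# Instead of deleting frozen buckets, copy them to an archive directory
coldToFrozenDir = /opt/splunk/archive/my_index

Frozen (archived) buckets are no longer searchable; to search them again they have to be thawed back into thawedPath. Splunk does not age out the archive itself, so the second year of retention is usually enforced by external housekeeping (for example a cron job) on the archive directory. A good starting point in the documentation is the "Set a retirement and archiving policy" topic in the indexer manual.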
Hi everyone, we need quick help overriding the ACL for an endpoint from our add-on application. We are making a POST request to the endpoint https://127.0.0.1:8089/servicesNS/nobody/{app_name}/configs/conf-{file_name} to modify configuration files, but it returns the error: "do not have permission to perform this operation (requires capability: admin_all_objects)". How do we override this endpoint to use a different capability/role?
Hi, I installed SA_CIM_Vladiator, and when running the % checks to see data model coverage I see gaps: fields that are extracted, or that are found on specific indexes, are not returned by the app in its results.
Hello, does anyone have a quick how-to on using this application, with examples?
Hello, I need to display 2 curves in my line chart from two different indexes, so I am doing this:

index="disk" sourcetype="Perfmon:disk"
| bin span=10m _time
| eval time=strftime(_time, "%H:%M:%S")
| stats avg(Value) as Disque by time
| eval Disque=round(Disque, 2)
| append [ search index="mem" sourcetype="Perfmon:mem"
    | bin span=10m _time
    | eval time=strftime(_time, "%H:%M:%S")
    | stats avg(Value) as Mémoire by time
    | eval Mémoire=round(Mémoire, 2)]

The problem I have is that on the x axis my curves are not aligned on the same time slots. What is wrong, please? Thanks
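One likely cause: each stats runs against a formatted time string, and append stacks the two result sets as separate rows keyed on that string, so the chart cannot merge them onto one shared time axis. A possible rework (a sketch, assuming both sourcetypes expose the same Value field) is to read both indexes in a single search and let timechart build the axis:

(index="disk" sourcetype="Perfmon:disk") OR (index="mem" sourcetype="Perfmon:mem")
| eval series=if(index="disk", "Disque", "Mémoire")
| timechart span=10m avg(Value) by series

Both series then share the same _time buckets, so the curves line up on the x axis.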
How to download and install a trial version of Splunk SOAR and MITRE Framework?
Hello, good morning. Currently I am sending the following data, but when it is ingested into Splunk it is not recognized as JSON:

Feb 5 18:50:30 10.0.30.81 {"LogTimestamp": "Tue Feb 6 00:50:31 2024","Customer": "xxxxxx","SessionID": "xxxxxx","SessionType": "TTN_ASSISTANT_BROKER_STATS","SessionStatus": "TT_STATUS_AUTHENTICATED","Version": "","Platform": "","XXX": "XX-X-9888","Connector": "XXXXXXXX","ConnectorGroup": "XXX XXX XXXXXX GROUP","PrivateIP": "","PublicIP": "18.24.9.8","Latitude": 0.000000,"Longitude": 0.000000,"CountryCode": "","TimestampAuthentication": "2024-01-28T09:26:31.592Z","TimestampUnAuthentication": "","CPUUtilization": 0,"MemUtilization": 0,"ServiceCount": 0,"InterfaceDefRoute": "","DefRouteGW": "","PrimaryDNSResolver": "","HostStartTime": "0","ConnectorStartTime": "0","NumOfInterfaces": 0,"BytesRxInterface": 0,"PacketsRxInterface": 0,"ErrorsRxInterface": 0,"DiscardsRxInterface": 0,"BytesTxInterface": 0,"PacketsTxInterface": 0,"ErrorsTxInterface": 0,"DiscardsTxInterface": 0,"TotalBytesRx": 19162399,"TotalBytesTx": 16432931,"MicroTenantID": "0"}

Can you help me? Can this leading header be removed using the forwarder, from the props files? Regards
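One possible approach (a sketch; the sourcetype name is a placeholder, and it assumes the syslog header always precedes the first "{"): strip everything before the JSON object at parse time, then let search-time JSON extraction handle the fields. The SEDCMD has to run on the first full Splunk instance that parses the data (heavy forwarder or indexer), not on a universal forwarder.

props.conf:
[zscaler:json]
# Remove the leading "Feb 5 18:50:30 10.0.30.81 " header so the event starts with "{"
SEDCMD-strip_syslog_header = s/^[^{]+//
KV_MODE = json

KV_MODE = json applies at search time on the search head, while the SEDCMD applies at index time, so the JSON fields will only be recognized for data ingested after the change.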
I am working with event data in Splunk where each event contains a command with multiple arguments. I'm extracting these arguments and their associated values using regex, resulting in multi-value fields within Splunk. However, I'm encountering a challenge where some arguments do not have an associated value, and for these cases, I would like to set their values to `true`. Here's the SPL I'm using for extraction: | rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?" What I need is to refine this SPL so that after extraction, any argument without a value is automatically assigned a value of `true`. After setting the default values, I would then like to use `mvexpand` to separate each argument-value pair into its own event. Could you provide guidance on how to adjust my regex or SPL command to accomplish this within Splunk?
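One way to approach the default value (a sketch; the pair-level regex is an assumption and may need tuning for quoted or hyphen-leading values) is to capture each argument together with its optional value as a single multi-value field, expand first, then split and fill in true:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<pair>--?[\w\-.@$#]+(?:\s+[^\s-][^\s]*)?)"
| mvexpand pair
| rex field=pair "^(?<flag>--?[\w\-.@$#]+)(?:\s+(?<value>.+))?$"
| eval value=coalesce(value, "true")

Expanding before splitting avoids the alignment problem that arises when the flag and value multi-value fields end up with different lengths.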
Will this add-on integrate with devices managed in Aruba Central as well?
I have a lookup file with 2 columns, Service and Entity, and 500+ rows. Service has 34 unique values and Entity has 164. I have a dashboard where I want to use values from this lookup as input to the search criteria. I have the following logic: I get the dropdown values for "Service" without any issues, but not for "Entity", even though it is the same lookup file and the same logic. Any ideas? Snippet:

<input type="dropdown" token="Service" searchWhenChanged="true">
  <label>Service</label>
  <search>
    <query>|inputlookup metadata.csv | dedup service | stats dc(service) by service</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>|inputlookup metadata.csv | dedup entity | stats dc(entity) by entity</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
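A sketch of an Entity dropdown that tends to be more predictable (assumptions: the CSV header really is entity in lowercase; lookup field names are case sensitive, so if the header is Entity, use that exact spelling, and declare the label/value fields explicitly):

<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <fieldForLabel>entity</fieldForLabel>
  <fieldForValue>entity</fieldForValue>
  <search>
    <query>| inputlookup metadata.csv | stats count by entity | fields entity</query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>

Running the populating query in the search bar first (| inputlookup metadata.csv | stats count by entity) is a quick way to confirm whether the field name matches the CSV header.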
We have a Splunk query that pulls down a list of values daily. We are looking to see if we can use Splunk to find the field value that is new today but was not present yesterday, and show it in a stats table. How can this be accomplished? The idea is:
Yesterday - the Splunk DB Connect query pulls back 5 log lines, all containing the field "name"; values: Bob, Kat, Abe, Doug, Sam.
Today - the Splunk DB Connect query pulls back 6 log lines, all containing the field "name"; values: Bob, Kat, Abe, Doug, Sam, Jim (new value found).
So we would like a stats table or alert that lets us know "Jim" is a new field value for name that did not exist yesterday.
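One way to do this (a sketch; the index and sourcetype are placeholders for your DB Connect input, and the search is assumed to run over a window that covers yesterday and today, e.g. earliest=-1d@d):

index=my_index sourcetype=my_dbconnect_input earliest=-1d@d
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as seen_on by name
| where mvcount(seen_on)=1 AND mvindex(seen_on, 0)="today"

Names seen on both days carry two values in seen_on and are filtered out, leaving only values such as "Jim" that first appeared today; the same search can back an alert that triggers when the result count is greater than zero.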
Hi, I am trying to divide the logs into different events based on the scenario below. I currently have one single event:

Issuer : hjlhjk
a: xyz
PrivateKey : abc
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

Now I want it as two events:

event 1:
Issuer : hjlhjk
a: xyz
PrivateKey : abc

event 2:
Issuer : dfjh
a: fhfh
PrivateKey : dsgd

How can I get this? I tried the line breaking below, which is not working:

[sourcetype]
LINE_BREAKER = ([\r\n]+)(PrivateKey)

[sourcetype]
BREAK_ONLY_BEFORE = Issuer
SHOULD_LINEMERGE = false
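A sketch that may help (it assumes each record starts with a line beginning with "Issuer"): break before every Issuer line using a lookahead so the marker itself stays with the following event, and keep SHOULD_LINEMERGE = false. Note that BREAK_ONLY_BEFORE only takes effect when SHOULD_LINEMERGE = true, which is why the second attempt changes nothing, and the first attempt breaks before "PrivateKey" rather than before "Issuer", so the records are grouped incorrectly.

[your_sourcetype]
SHOULD_LINEMERGE = false
# Break only at newlines that are immediately followed by an "Issuer" line
LINE_BREAKER = ([\r\n]+)(?=Issuer\s*:)

Apply this in props.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder); it affects newly ingested data only.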
index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats avg(responseTime), max(responseTime) by client_id, servicePath, respStatus

The above query gives me one row per client_id/servicePath/respStatus combination. I want the respStatus column split into 3 columns, so the table looks like this:

clientID | Service Path | Success count | Error Count | Exception Count | Avg Resp time | Max Resp time
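One possible rewrite (a sketch; it assumes the average and maximum response times should be computed across all statuses for each client_id/servicePath pair):

index=xxxx source=*xxxxxx*
| eval respStatus=case(responseStatus>=500, "ERRORS", responseStatus>=400, "EXCEPTIONS", responseStatus>=200, "SUCCESS")
| stats count(eval(respStatus="SUCCESS")) as "Success count"
        count(eval(respStatus="ERRORS")) as "Error Count"
        count(eval(respStatus="EXCEPTIONS")) as "Exception Count"
        avg(responseTime) as "Avg Resp time"
        max(responseTime) as "Max Resp time"
        by client_id servicePath

The count(eval(...)) pattern turns each status into its own column while keeping a single row per client_id and servicePath.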
Splunk Edge Processor offers more efficient, flexible data transformation – helping you reduce noise, control costs, and gain visibility and control over your data in motion. It works at the edge of your network and is included with your Splunk Cloud Platform, available at no additional cost. Learn more about the Edge Processor solution, including resources to get started.

With Edge Processor, you can:
- Filter low-value or noisy data, like DEBUG logs.
- Enrich and extract only the critical data.
- Route different "slices" of data to Splunk platform and Amazon S3.

Edge Processor pipelines use SPL2 to define the logic for filtering, masking, and transforming data before routing it to supported destinations. SPL2 lets you use one common language to both search and transform your data. This gives you the flexibility to filter out parts of the event itself, in addition to the entire event (a rough pipeline sketch follows the resource list at the end of this post). Today, Splunk Edge Processor can receive data from many sources, including Universal Forwarders, HTTP Event Collector, syslog and more.

Use Case Prerequisites
Before you can implement use cases with Edge Processor, make sure you have:
- Connected your Edge Processor tenant to your Splunk Cloud Platform deployment via the first-time setup instructions.
- Created an Edge Processor instance by following the steps under "configure and deploy Edge Processor".

Splunk Edge Processor Common Use Cases
The links below walk you through common use cases that Splunk Edge Processor can address. These can help you reduce ingest volume to optimize costs around data storage and transfer, protect sensitive information, and significantly improve your time to value.

Filter and Route Data
- Reduce and route logs for cost-effective storage [Blog]: Step-by-step guidance to reduce substantial volumes of ingested logs and route them to Amazon S3 for cost-effective storage.
- Filter Kubernetes data over HTTP Event Collector (HEC) [Video]: This video walks you through how to build a pipeline to filter noisy events from Kubernetes pods using the HTTP Event Collector (HEC).
- Reduce security firewall logs (PAN and Cisco) with Splunk Edge Processor [Lantern]: Are you swamped by the relentless surge of log data from your Palo Alto Networks (PAN) and Cisco devices? Follow this step-by-step guidance to reduce your firewall logs with Edge Processor. You can also watch the demo video walkthrough or read the blog for more context.
- Filter verbose data sources and transform content for Windows system events [Blog]: Scroll down this blog to see how to filter verbose data sources, such as Windows event logs, and to retain selected events or content within an event, then route an unfiltered copy to an AWS S3 bucket.

Transform, Mask, and Route Data
- Enrich data via real-time threat detection with KV Store lookups [Lantern]: By creating and applying a pipeline that uses a lookup, you can configure an Edge Processor to add more information to the received data before sending that data to a destination (docs). In this case, the objective is to use the event fields present in your ingested data to preemptively identify and flag malicious activity.
- Modify raw events to remove fields and reduce storage [Video]: Remove unwanted fields from a raw event and reconstruct it with a reduced number of fields to optimize storage in the Splunk platform. Similar logic can be used to drop as many fields as desired to reduce your storage footprint and improve performance.
- Convert complex data into metrics with Edge Processor [Lantern]: This step-by-step guide walks you through how to transform complex, bloated data into metrics by pre-processing your data with Edge Processor so you can cut storage costs. For a simplified version of this process, see Converting logs into metrics with Edge Processor for beginners.
- Route root user events to a special index [Lantern]: This use case provides step-by-step guidance to filter any events relating to the "root" user in your Linux authentication data and send them to an index created for this purpose called admin.
- Mask sensitive credit card information [Video]: Masking logic can be applied to credit card information to extract the card number field and replace the value with a string of your choosing, ensuring that the data remains secure and your business complies with data privacy regulations.
- Mask IP addresses from a specific range [Lantern]: There are multiple ways of achieving this IP masking use case with SPL2, depending on how flexible you want your pipeline to be. This article looks at two possible methods: 1) using eval replace and 2) using rex and cidrmatch.

Additional Resources
Check out these additional resources to learn more and get started using Edge Processor:
- Edge Processor Resource Hub
- Lantern step-by-step 'Getting Started' guide
- Requirements to use Edge Processor and how to request access if you do not already have it
- Splunk Edge Processor release notes and documentation
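For orientation, here is a rough sketch of what a filtering pipeline can look like in SPL2 (a minimal illustration, not taken from the linked articles; the $source and $destination placeholders follow the Edge Processor pipeline builder, and the DEBUG filter expression is an assumption to adapt to your data):

$pipeline = | from $source
            | where not match(_raw, /DEBUG/)
            | into $destination;

In this sketch, any event whose raw text matches DEBUG is dropped before the remaining data is routed to the configured destination.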
I've been working to recreate a query in Splunk from Microsoft Defender for Endpoint that shows what files users have copied to USB drives. The query works like this:
Step 1: Get all USB mount events.
Step 2: Get all file creation events on drives that are not C.
Step 3: Join the above two data sources by Device ID.
Step 4: Match drive letters and make sure the USB mount time is less than the file create time.
Here's Microsoft's query: Microsoft-365-Defender-Hunting-Queries/Exfiltration/Files copied to USB drives.md at master · microsoft/Microsoft-365-Defender-Hunting-Queries · GitHub
In Splunk I get to step three and then I'm not able to filter values based on that. Below is my query so far. Any suggestions would be helpful.

index=atp category="AdvancedHunting-DeviceFileEvents" properties.InitiatingProcessAccountName!="system" properties.ActionType="FileCreated" properties.FolderPath!="C:\\*" properties.FolderPath!="\\*"
| fields properties.ReportId, properties.DeviceId, properties.InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn, properties.FileName, properties.FolderPath, properties.SHA256, properties.Timestamp, properties.SensitivityLabel, properties.IsAzureInfoProtectionApplied
| rename properties.ReportId as ReportId, properties.DeviceId as DeviceId, properties.InitiatingProcessAccountDomain as InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName as InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn as InitiatingProcessAccountUpn, properties.FileName as FileName, properties.FolderPath as FolderPath, properties.SHA256 as SHA256, properties.Timestamp as Timestamp, properties.SensitivityLabel as SensitivityLabel, properties.IsAzureInfoProtectionApplied as IsAzureInfoProtectionApplied
| eval Timestamp_epoch = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| sort DeviceId, Timestamp desc
| join type=inner left=L right=R where L.DeviceId = R.DeviceId
    [search index=atp category="AdvancedHunting-DeviceEvents" properties.ActionType="UsbDriveMounted"
    | spath input=properties.AdditionalFields
    | fields properties.DeviceId, properties.DeviceName, DriveLetter, properties.Timestamp, ProductName, SerialNumber, Manufacturer
    | sort properties.DeviceId, properties.Timestamp desc
    | rename properties.DeviceId as DeviceId, properties.DeviceName as DeviceName, properties.Timestamp as MountTime
    | eval MountTime_epoch = strptime(MountTime, "%Y-%m-%dT%H:%M:%S.%6N%Z") ]
| table L.FolderPath, R.DriveLetter, R.MountTime, R.MountTime_epoch, L.Timestamp, L.Timestamp_epoch
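For step four, one possible continuation after the join (a sketch; fields carrying the L. and R. prefixes contain dots, so they need single quotes when referenced in where/eval, and it assumes R.DriveLetter holds the leading portion of the folder path, e.g. "E:"):

| where like('L.FolderPath', 'R.DriveLetter'."%") AND 'L.Timestamp_epoch' > 'R.MountTime_epoch'

This keeps only file creations whose path starts with the mounted drive letter and whose creation time is later than the mount time, which mirrors the drive-letter and timing checks in the Microsoft query.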
Hello, where does Splunk get the data from CrowdStrike to build the drilldown dashboards under Detections and Events called "CrowdStrike Detections Allowed/Blocked Breakdown" and "CrowdStrike Events Allowed/Blocked Breakdown"? My confusion is that in the CrowdStrike Falcon console I don't see the terms "Blocked/Allowed" used for detections or events, so I need to know how Splunk correlates those drilldown dashboard sections to CrowdStrike. What data from CrowdStrike does Splunk use to create those Blocked/Allowed sections?
Hey everyone! We just started using Splunk ES and have it up and running fairly well, and I have a couple of questions I'm hoping to get some guidance on, or a point in the right direction. I would like to set up the ability for analysts to run local scripts as adaptive response actions that take dynamic user input as variables to query external APIs. Another scenario I was hoping we could use: specific tokens/fields as the dynamic variables for these scripts, with the output returned to the analyst in the adaptive response when they are run. Are any of these scenarios possible with ES? We have tried to find a way to do this but so far have not come up with a successful implementation. Is there any documentation on implementing something like this? Any help would be very much appreciated!
Hi, I have two Splunk searches, search-1 and search-2, and I have to create a Splunk alert for search-2 based on search-1: if the search-1 count is greater than 0, then trigger the search-2 alert. Regards, vch
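One common pattern (a sketch; <search-1> and <search-2> stand for your actual queries) is to gate search-2 behind search-1 with map, so results only come back when search-1 found something:

<search-1>
| stats count
| where count > 0
| map search="search <search-2>"

Because where count > 0 leaves at most one row, map runs search-2 only when search-1 returned events; save this as the alert and set it to trigger when the number of results is greater than zero.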
Hello, how can I click a button or a link to run a search and download a CSV file in Dashboard Studio? At this time, I have to click the magnifying glass to open a search, then click "Export" to download the CSV file. I don't have access to the REST API or Splunk Developer. Please suggest. Thank you for your help.
Good afternoon. I have this Splunk architecture:
- 1 search head
- 2 indexers in a cluster
- 1 master node / license server
- 1 monitoring console / deployment server
- 2 heavy forwarders
- SF=2, RF=2
I added a new indexer to the cluster, and after that I tried to change the RF and SF, both to 3. But when I change the values from Splunk Web on the master node and restart the instance, the platform shows me an error message (screenshot not included). I then rolled back to SF=2 and RF=2 and everything is normal again, but the bucket status shows I need to change the SF and RF, and I need to know if this will fix the issues with the indexes. Regards
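For reference, a sketch of where these settings live on the cluster manager if you prefer the CLI over Splunk Web (stanza values are an example; with three peers, RF=3 is the maximum, and SF can never exceed RF):

# $SPLUNK_HOME/etc/system/local/server.conf on the master node (newer releases use mode = manager)
[clustering]
mode = master
replication_factor = 3
search_factor = 3

After restarting the manager, the cluster runs fix-up tasks to create the extra bucket copies on the peers, which can take a while. You can watch progress with:

splunk show cluster-status --verbose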