All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, bit of a novice here. I am planning to migrate a Splunk universal forwarder from one Windows server to another.

To my understanding, this is the process I have come up with:

1. Copy the Splunk home folder from the original forwarder to the newly commissioned server.
2. Download the same version of Splunk.
3. Run the MSI installer, agree to the terms and conditions, open the customise settings, and select the same install location as the pre-existing configuration.

Will the installer then prompt me for any other information, given that it already has the configuration? For example, will it ask me for the deployment server address or the indexer address, which system account is being used, or to create a local Splunk administration account?

Will I need to change the host name in any configuration files, if it is not the same as the original server?

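On the host name question: the old server's name can linger in a couple of files inside the copied configuration. A minimal sketch of the settings usually worth checking after the move (paths and values here are illustrative, not from the original post):

$SPLUNK_HOME\etc\system\local\server.conf
[general]
serverName = new-server-name

$SPLUNK_HOME\etc\system\local\inputs.conf
[default]
host = new-server-name

If these are left pointing at the old server, events from the new machine will still report the old host value.
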
Hi, I have Tenable JSON logs. I wrote a regex and am trying to send the logs to the null queue; however, they are not going to nullQueue. A sample log is given below:

{ [-]
  SC_address: X.xx.xx
  acceptRisk: false
  acceptRiskRuleComment:
  acrScore:
  assetExposureScore:
  baseScore:
  bid:
  checkType: summary
  cpe:
  custom_severity: false
  cve:
  cvssV3BaseScore:
  cvssV3TemporalScore:
  cvssV3Vector:
  cvssVector:
  description: This plugin displays, for each tested host, information about the scan itself : - The version of the plugin set. - The type of scanner (Nessus or Nessus Home). - The version of the Nessus Engine. - The port scanner(s) used. - The port range scanned. - The ping round trip time - Whether credentialed or third-party patch management checks are possible. - Whether the display of superseded patches is enabled - The date of the scan. - The duration of the scan. - The number of hosts scanned in parallel. - The number of checks done in parallel.
  dnsName: xxxx.xx.xx
  exploitAvailable: No
  exploitEase:
  exploitFrameworks:
  family: { [+] }
  firstSeen: X
  hasBeenMitigated: false
  hostUUID:
  hostUniqueness: repositoryID,ip,dnsName
  ip: x.x.x.x
  ips: x.x.x.x
  keyDrivers:
  lastSeen: x
  macAddress:
  netbiosName: x\x
  operatingSystem: Microsoft Windows Server X X X X
  patchPubDate: -1
  pluginID: 19506
  pluginInfo: 19506 (0/6) Nessus Scan Information
  pluginModDate: X
  pluginName: Nessus Scan Information
  pluginPubDate: xx
  pluginText: <plugin_output>Information about this scan : Nessus version : 10.8.3 Nessus build : 20010 Plugin feed version : XX Scanner edition used : X Scanner OS : X Scanner distribution : X-X-X Scan type : Normal Scan name : ABCSCAN Scan policy used : x-161b-x-x-x-x/Internal Scanner 02 - Scan Policy (Windows & Linux) Scanner IP : x.x.x.x Port scanner(s) : nessus_syn_scanner Port range : 1-5 Ping RTT : 14.438 ms Thorough tests : no Experimental tests : no Scan for Unpatched Vulnerabilities : no Plugin debugging enabled : no Paranoia level : 1 Report verbosity : 1 Safe checks : yes Optimize the test : yes Credentialed checks : no Patch management checks : None Display superseded patches : no (supersedence plugin did not launch) CGI scanning : disabled Web application tests : disabled Max hosts : 30 Max checks : 5 Recv timeout : 5 Backports : None Allow post-scan editing : Yes Nessus Plugin Signature Checking : Enabled Audit File Signature Checking : Disabled Scan Start Date : x/x/x x Scan duration : X sec Scan for malware : no </plugin_output>
  plugin_id: xx
  port: 0
  protocol: TCP
  recastRisk: false
  recastRiskRuleComment:
  repository: { [+] }
  riskFactor: None
  sc_uniqueness: x_x.x.x.x_xxxx.xx.xx
  seeAlso:
  seolDate: -1
  severity: informational
  severity_description: Informative
  severity_id: 0
  solution:
  state: open
  stigSeverity:
  synopsis: This plugin displays information about the Nessus scan.
  temporalScore:
  uniqueness: repositoryID,ip,dnsName
  uuid: x-x-x-xx-xxx
  vendor_severity: Info
  version: 1.127
  vprContext: []
  vprScore:
  vulnPubDate: -1
  vulnUUID:
  vulnUniqueness: repositoryID,ip,port,protocol,pluginID
  xref:
}

In props.conf:

[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

In transforms.conf:

[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

It is not working. Any solution? I later removed SOURCE_KEY; that is also not working.

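For reference, the stanza syntax above matches the documented nullQueue routing pattern, so the configuration itself looks right; the usual culprit is where the filter runs. Index-time TRANSFORMS only execute on the first Splunk component that parses the data (an indexer or heavy forwarder), never on a universal forwarder, and the props stanza must match the event's actual sourcetype. A minimal sketch of the standard pattern, assuming the events really arrive with sourcetype tenable:sc:vuln and parse on the indexer:

props.conf (on the indexer or heavy forwarder):

[tenable:sc:vuln]
TRANSFORMS-null = tenable_remove_logs

transforms.conf (same host):

[tenable_remove_logs]
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

Note that if the Tenable data is ingested by a modular input or a HEC endpoint that delivers it pre-parsed, these index-time transforms may never see it.
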
Hi Team, to reduce the time taken to load my Splunk dashboard, I created a new summary index to collect events. The populating report retrieves the past 30 days of events, is scheduled to run every hour, and has summary indexing enabled, as per the documentation: Create a summary index in Splunk Web - Splunk Documentation. However, while checking the index, I can see that data ingestion is not taking place per the scheduled report. Please find the attached screenshots for additional reference. Looking forward to a workaround or a solution.

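Two sketches that are often useful when debugging this kind of setup (the index and report names are assumptions, not from the original post). First, a populating search normally writes via an si- command or an explicit collect:

index=my_source earliest=-30d@d
| stats count by host
| collect index=my_summary

Second, scheduler activity for the report can be checked in the internal index:

index=_internal sourcetype=scheduler savedsearch_name="My Summary Report"
| table _time status run_time

If the scheduler shows the report running but nothing lands in the summary index, the usual suspects are permissions on the target index or the report's time range and schedule overlapping incorrectly.
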
So I have search A, which creates a variable from the search results (variableA). I need to search another index using variableA in the source, and I want to append one column from the second search into a table with the results from the first, like this:

index=blah source=blah
| rex variableA=blah, field1=blah, field2=blah, field3=blah

index=blah source=$variableA$
| rex field4=blah

table field1, field2, field3, field4

Not sure how this gets done?

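A hedged sketch of how this shape of problem is often handled in SPL (all index, source, and field names below are placeholders): feed the first search into the second as a subsearch that supplies the source filter. The subsearch's output field is renamed to source, so it renders as source="..." in the outer search:

index=blah2
    [ search index=blah source=blah
      | rex field=_raw "variableA=(?<source>\S+)"
      | fields source
      | dedup source ]
| rex field=_raw "field4=(?<field4>\S+)"

To then combine field4 with field1-field3 in a single table, the two result sets need a shared key; one option is deriving variableA from source on the second search and joining:

index=blah source=blah
| rex field=_raw "variableA=(?<variableA>\S+) field1=(?<field1>\S+) field2=(?<field2>\S+) field3=(?<field3>\S+)"
| join type=left variableA
    [ search index=blah2
      | eval variableA=source
      | rex field=_raw "field4=(?<field4>\S+)"
      | fields variableA field4 ]
| table field1 field2 field3 field4
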
I have a search on my dashboard that takes ~20 seconds to complete. This search is a member of a chain.

Base:

index=yum sourcetype=woohoo earliest=-12h@h
| table device_name field1 field2 field3

Chained search:

| search field1="$field1_tok$" AND field2="$field2_tok$"

Panel:

| stats sum(field3) as field3 by device_name
| sort - field3
| table device_name field3
| head 10

Everything works fine, but when I change tokens the panel loads a cached version of the table with incorrect values. Five to ten seconds later the panel updates with the correct values, but without any indication. So, is there a setting to turn off these cached results?

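For reference, Simple XML exposes a per-visualization option that at least makes the re-run visible instead of silently showing stale data while the chained search re-dispatches. A sketch, assuming the panel is a table in a Simple XML dashboard (base_search is a placeholder id):

<table>
  <search base="base_search">
    <query>| search field1="$field1_tok$" AND field2="$field2_tok$" | stats sum(field3) as field3 by device_name | sort - field3 | head 10</query>
  </search>
  <option name="refresh.display">progressbar</option>
</table>

refresh.display accepts progressbar, preview, or none; progressbar overlays an indicator while results are being refreshed rather than leaving the previous table in place with no hint.
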
The documentation seems to suggest that version 8.0.1 of "Splunk Enterprise Security" is available for download from Splunkbase; however, the latest version available there appears to be 7.3.2. Am I missing something?

https://docs.splunk.com/Documentation/ES/8.0.1/Install/UpgradetoNewVersion
https://splunkbase.splunk.com/app/263/

Ever since upgrading Windows clients to 9.0 and above, we've had access issues. We've resolved some of that by adding the "SplunkForwarder" user (which gets provisioned at install time) to the Event Log Readers group. Unfortunately, that hasn't resolved all access issues; IIS logs, for instance. When I deploy a scripted input to a test client to provide a directory listing of C:\Windows\System32\Logfiles\HTTPERR, the internal index gets a variety of errors, one of which is included below (yes, the directory exists):

Get-ChildItem : Access to the path 'C:\Windows\System32\Logfiles\HTTPERR' is denied

So, other than having our IT staff reinstall the UF everywhere to run as a System-privileged user, as it has run in every version I've ever worked with: how are we to know what group the SplunkForwarder user needs to be added to in order to read data that is not under the purview of "Event Log Readers"?

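Where no built-in group covers a path, one common alternative to reinstalling is granting the forwarder's service account read access on the specific directories it must monitor. A sketch using icacls, assuming the local account is literally named SplunkForwarder (adjust for a domain account):

icacls "C:\Windows\System32\LogFiles\HTTPERR" /grant "SplunkForwarder:(OI)(CI)RX"

(OI)(CI) propagates the grant to files and subfolders, and RX is read-and-execute. The same pattern applies to IIS log directories such as C:\inetpub\logs\LogFiles.
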
Hello. I am trying to get SAML authentication working on Splunk Enterprise using our local IdP, which is SAML 2.0 compliant. I can successfully authenticate against the IdP, which returns the assertion, but Splunk won't let me in. I get this error: "Saml response does not contain group information." I know Splunk looks for a 'role' variable, but our assertion does not return that. Instead, it returns "memberOf", and I added that to authentication.conf:

[authenticationResponseAttrMap_SAML]
role = memberOf

I also map the role under roleMap_SAML. It seems like no matter what I do, no matter what I put, I get the "Saml response does not contain group information." response. I have a ticket open with tech support, but at the moment they're not sure what the issue is. Here's a snippet (masked) of the assertion response:

<saml2:Attribute FriendlyName="memberOf" Name="urn:oid:1.2.xxx.xxxxxx.1.2.102" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
  <saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xsd:string">
    xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
  </saml2:AttributeValue>
</saml2:Attribute>

Feeling out of options, I asked ChatGPT (I know, I know), and it said the namespace our assertion uses may be the issue: Splunk expects the "saml" namespace, but our IdP is returning "saml2". I don't know if that's the actual issue nor, if it is, what to do about it. splunkd.log shows the error message that I'm seeing in the web interface:

12-12-2024 15:14:24.611 -0500 ERROR Saml [847764 webui] - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=memberOf err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute

I've looked at the Splunk SAML docs but don't see anything about namespacing, so maybe ChatGPT just made that up. What exactly is Splunk looking for that I'm not providing? If anyone has any suggestions or insight, please let me know. Thank you!

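One configuration detail that sometimes matters here: the mapped value is matched against the Attribute element's Name, so when the IdP sends the group attribute under an OID-style Name with memberOf only as a FriendlyName, mapping by the OID is worth trying. A sketch against the masked values from the post above, not a verified fix:

[authenticationResponseAttrMap_SAML]
role = urn:oid:1.2.xxx.xxxxxx.1.2.102

[roleMap_SAML]
admin = xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
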
I am creating a dashboard with Splunk to monitor offline assets in my environment with SolarWinds. I have the add-on and incorporate solarwinds:nodes and solarwinds:alerts into my query. I am running into an issue where I can't get the correct output for how long an asset has been down. In SolarWinds you can see the trigger time in the Alert Status Overview; this shows the exact date and time the node went down. I cannot find a field in the raw data, in either sourcetype, that will give me that output. I want to use eval to show how much time has passed since the trigger. Does anyone know how to achieve this?

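Once a trigger timestamp can be located, the elapsed-time calculation itself is straightforward in SPL. A sketch, where TriggerTime is an assumed field name and its format string is an assumption:

sourcetype=solarwinds:alerts
| eval trigger_epoch = strptime(TriggerTime, "%Y-%m-%d %H:%M:%S")
| eval down_seconds = now() - trigger_epoch
| eval downtime = tostring(down_seconds, "duration")
| table host TriggerTime downtime

tostring(x, "duration") renders seconds as days+HH:MM:SS. If no trigger field exists in the events at all, an alternative is deriving it from the earliest "down" event per node, e.g. | stats min(_time) as trigger_epoch by host.
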
Hi, I can see the below error in the internal logs for a host that is not bringing any logs into Splunk:

ERROR SSLOptions [17960 TcpListener] - inputs.conf/[SSL]: could not read properties

We don't have SSL options in inputs.conf. Just wondered if there were any other locations to check on the universal forwarder, as it works fine for other servers.

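For reference, the [SSL] stanza this error refers to lives alongside a splunktcp-ssl input, and it can come from any app on the instance, not just system/local. The merged view can be checked with btool:

$SPLUNK_HOME/bin/splunk btool inputs list SSL --debug

A sketch of what a complete stanza pair looks like, with illustrative values:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <password>

If an [SSL] stanza exists anywhere in the merged configuration without a readable certificate, the error above can appear even though system/local/inputs.conf looks clean; deployed apps under $SPLUNK_HOME/etc/apps/*/local/ are the usual place to look.
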
Hi there! I want to create a scorecard by manager and region, counting my orders over month. So the chart would look something like the attached example. I have all the fields: Region, Director, Month, and Order_Number to make a count. Please let me know if you have an efficient way to do this in SPL. Thank you very much!

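A minimal SPL sketch for this shape of result, assuming one event per order and the field names given in the post (index=orders is a placeholder):

index=orders
| chart count(Order_Number) as orders over Director by Month

or, to keep Region available for grouping or filtering first:

index=orders
| stats count(Order_Number) as orders by Region Director Month
| xyseries Director Month orders

chart ... over X by Y produces one row per Director with one column per Month; the stats/xyseries variant computes the same counts but lets Region be handled before the reshape.
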
Hi, I'm quite new to Splunk when it comes to sending data to it, though I do have experience with making dashboards etc. I've got a problem receiving data from a Windows PC. I've installed the universal forwarder on there, and I've got another Windows PC that acts as my enterprise environment. I do know that the forwarder is active and can see a connection. I want to send WinEventLog data to Splunk. I've made an inputs.conf and outputs.conf containing information for what I want to forward, but when I look it up in search I have 0 events. I'm sure I'm doing some things wrong haha. I would like some help with it. Thanks!

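A minimal working pair of files for this scenario, as a sketch; the indexer address 192.168.1.10 and port 9997 are placeholders. Both files go under $SPLUNK_HOME\etc\system\local on the forwarder:

inputs.conf:

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

outputs.conf:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 192.168.1.10:9997

The receiving instance also needs a listener (Settings > Forwarding and receiving > Configure receiving, port 9997), and the index you search must match where the data lands; without an index= setting in the stanza, the events go to the default index (main).
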
Hello all, I see that SOAR sends an email every time a container is re-assigned. I wish to stop SOAR from sending that email, but under Administration -> Email Settings I only manage to change the template of the email. Is there a way to stop it? Thank you in advance.

Hello everyone, I need your support to parse the sample JSON below. What I want is:

1. Only the fields from "activity_type" to "user_email"
2. Remove the first lines before "activity_type" and the last lines after "user_email"
3. Lines should break at "activity_type"
4. TIME_PREFIX = event_time

I added the below, but it doesn't work for removing the lines or for TIME_PREFIX:

[sample_json]
BREAK_ONLY_BEFORE=\"activity_type":\s.+,
CHARSET=UTF-8
SHOULD_LINEMERGE=true
disabled=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=event_time
SEDCMD-remove=s/^\{/g

Sample data:

{
  "status": 0,
  "message": "Request completed successfully",
  "data": [
    {
      "activity_type": "login",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1733907370512,
      "id": "XcDutJMBNXQ_Xwfn2wgV",
      "ip_address": "x.x.x.x",
      "is_impersonated_user": false,
      "item": { "user_email": "xyz@example.com" },
      "message": "User xyz@example.com logged in",
      "object_id": 0,
      "object_name": "",
      "object_type": "session",
      "source": "",
      "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15",
      "user_email": "xyz@example.com"
    },
    {
      "activity_type": "export",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1732634960475,
      "id": "bd0XaZMBH5U9RA7biWrq",
      "ip_address": "",
      "is_impersonated_user": false,
      "item": null,
      "message": "Incident Detail Report generated successfully",
      "object_id": 0,
      "object_name": "",
      "object_type": "breach incident",
      "source": "",
      "user_agent": "",
      "user_email": ""
    },
    {
      "activity_type": "logout",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1732625563087,
      "id": "jaGHaJMB-qVJqBPy_3IG",
      "ip_address": "87.200.106.98",
      "is_impersonated_user": false,
      "item": { "user_email": "xyz@example.com" },
      "message": "User xyz@example.com logged out",
      "object_id": 0,
      "object_name": "",
      "object_type": "session",
      "source": "",
      "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
      "user_email": "xyz@example.com"
    }
  ],
  "count": 33830,
  "meta_info": {
    "total_rows": 33830,
    "row_count": 200,
    "pagination": { "pagination_id": "" }
  }
}

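A hedged sketch of a props.conf that addresses the four requirements, assuming the file arrives as one JSON document shaped exactly like the sample (untested against the real feed; the regexes may need adjusting). event_time is epoch milliseconds, hence the %s%3N format:

[sample_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,)\s*\{\s*"activity_type"
TIME_PREFIX = "event_time":\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
SEDCMD-strip_header = s/^\{.*?"data":\s*\[\s*//g
SEDCMD-strip_footer = s/\]\s*,\s*"count".*$//g

LINE_BREAKER's first capture group is discarded, so events split between records; BREAK_ONLY_BEFORE only applies when SHOULD_LINEMERGE=true, so it is dropped here. SEDCMD runs per event after line breaking, so the header and footer expressions only need to match on the first and last records respectively.
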
Hello, I have an all-in-one Splunk instance with data already indexed. Now I want to add a new indexer (not clustered, clean installation), and I would like to move part of the indexed data to the new indexer (to have approximately the same amount of data on both instances). My idea of the process, with a sketch of step 4 after this list, is:

1. Stop the all-in-one instance
2. Create the new index(es) on the new indexer
3. Stop the new indexer
4. Copy (what is best - rsync?) part of the buckets in the given index(es) from the all-in-one instance to the new indexer
5. Start the new indexer and the all-in-one instance
6. Configure outputs.conf on forwarders - add the new indexer
7. Add the new indexer as a search peer to the all-in-one instance

Would it work, or have I missed something? Thank you for help. Best regards, Lukas Mecir

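For step 4, a minimal sketch of the copy, assuming the index is called myindex and lives under the default /opt/splunk/var/lib/splunk path (both are assumptions). Warm and cold buckets are self-contained directories, so individual buckets can be moved wholesale while both instances are stopped:

rsync -av /opt/splunk/var/lib/splunk/myindex/db/db_1700000000_1690000000_42 \
      newindexer:/opt/splunk/var/lib/splunk/myindex/db/

Two things worth double-checking: copy whole bucket directories, never partial contents, and remove the copied buckets from the source afterwards so the same bucket does not exist on both search peers, which would produce duplicate events in distributed search results.
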
Hi, I want to extract from this date, 12/11/2024; the result should be 12/2024.

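A minimal SPL sketch, assuming the value lives in a field called date (a placeholder name) and that the first and last segments are the ones to keep:

| eval month_year = replace(date, "^(\d{1,2})/\d{1,2}/(\d{4})$", "\1/\2")

replace() supports regex capture groups, so 12/11/2024 becomes 12/2024. If the value is a real date and 12 is the month, strftime(strptime(date, "%m/%d/%Y"), "%m/%Y") gives the same result with proper date semantics.
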
Hi, I have a dashboard in which I am using JavaScript. Whenever I make changes to the script and restart Splunk, I am not able to see those changes; the only way I can see them is by clearing my browser cache.

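For reference, Splunk Web versions and caches its static assets, and the usual way to force clients to pick up changed dashboard JavaScript without a full browser cache clear is the _bump endpoint, an admin-only page that increments the static asset version (host and port are illustrative):

https://<splunk-web-host>:8000/en-US/_bump

Visiting that URL and clicking the bump button after each script change is typically enough; a plain splunkd restart does not invalidate the browser-side cache by itself.
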
Hi Splunkers, per this documentation - https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/tokens - setting a default value is done by navigating to the Interactions section of the Configuration panel. This is simple with the given example, with the token set as $method$:

"tokens": {
    "default": {
        "method": {
            "value": "GET"
        }
    }
}

Would anyone be able to advise how I can set default tokens of a dashboard (created using Dashboard Studio) if the value of the panel is pointing to a data source whose query has a dependency on another data source's results?

Panel A:
Data Source: 'Alpha status'
'Alpha status' query:

| eval status=$Beta status:result._statusNumber$

e.g. I need to set a default token value for $Beta status:result._statusNumber$. Thanks in advance for the response.

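One untested idea, based only on the token-defaults structure quoted from the documentation above: the defaults block is keyed by token name, so it may be possible to key an entry by the full data-source token name. Whether Dashboard Studio honours defaults for data-source-bound tokens is an assumption, not something the documentation confirms:

"tokens": {
    "default": {
        "Beta status:result._statusNumber": {
            "value": "0"
        }
    }
}
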
Hi All, I have a few columns in the format "21 (31%)"; these are the value and the percentage of the value. I want to use MinMidMax coloring based on the percentage, but I am not able to use it directly since it is a customized value. Does anyone know a solution for coloring such columns?

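A common workaround is to carry the percentage in a separate numeric field and drive the coloring from that. A sketch, assuming a column named my_col in the "21 (31%)" format (field names are placeholders):

| eval my_col_pct = tonumber(replace(my_col, ".*\((\d+)%\).*", "\1"))

MinMidMax coloring can then be applied to my_col_pct. Depending on the dashboard type, the numeric column can be hidden from display while still being available for formatting, or the two columns can simply be shown side by side.
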
Hi smart folks. I have the output of a REST API call, as seen below. I need to split each of the records, as delimited by the {}, into its own event, with each of the key:values defined for each record.

[
  {
    "name": "ESSENTIAL",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 17,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  },
  {
    "name": "ADVANTAGE",
    "status": "ENABLED",
    "compliance": "EVALUATION",
    "consumptionCounter": 0,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Jul 09,2024 22:49:25 PM"
  },
  {
    "name": "PREMIER",
    "status": "ENABLED",
    "compliance": "EVALUATION",
    "consumptionCounter": 0,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Aug 10,2024 21:10:44 PM"
  },
  {
    "name": "DEVICEADMIN",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 2,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  },
  {
    "name": "VM",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 2,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  }
]

Thanks in advance for any help you all might offer to get me down the right track.

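If the whole array has landed as a single event, a common search-time pattern splits it with spath and mvexpand. A sketch, assuming the raw event is exactly the JSON array above (index and sourcetype names are placeholders):

index=my_index sourcetype=my_api
| spath path={} output=record
| mvexpand record
| spath input=record
| table name status compliance consumptionCounter daysOutOfCompliance lastAuthorization

spath path={} pulls each top-level array element into a multivalue field, mvexpand fans them out into one event per record, and the second spath extracts the key:value pairs from each record. Splitting at index time instead would be a LINE_BREAKER/SEDCMD job in props.conf.
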