All Topics

We are having an issue where, in order to see correct JSON syntax highlighting, "max lines" has to be set to "all lines". On a separate post the resolution was to turn off "pretty printing" so that instead of each event taking up multiple lines, it only takes up one, which then allows Splunk to show the data with the correct JSON syntax highlighting. How do I turn this off?
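If the goal is to flatten the pretty-printed JSON before it is indexed, one possibility (a sketch, assuming you control props.conf for this sourcetype on the indexers or heavy forwarders, that event breaking already keeps each JSON object as a single multi-line event, and with a placeholder sourcetype name) is a SEDCMD that strips the internal newlines:

# props.conf on the indexers / heavy forwarders (sourcetype name is hypothetical)
[my_pretty_json]
# collapse newlines plus leading indentation into a single space
SEDCMD-flatten_json = s/[\r\n]+\s*/ /g

Note this rewrites _raw at index time, so the original pretty-printed layout is not preserved in the stored event.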
Hello, I'm trying to achieve a result set which can be used in an alert later on. Basically, when the search is executed, it should look at a field named "state" and compare its value with the value from two hours ago for the same corresponding record, identified by the field "pv_number". If the value did not change between now and two hours ago, capture it in a table showing the previous state and current state along with the previous time and current time. Any help is greatly appreciated. Thanks much!
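A minimal sketch of one way to frame this, assuming a two-hour search window and placeholder index/sourcetype names (only state and pv_number come from the post): take the earliest and latest values per pv_number and keep the rows where the state has not changed.

index=my_index sourcetype=my_sourcetype earliest=-2h latest=now
| stats earliest(_time) as previous_time earliest(state) as previous_state latest(_time) as current_time latest(state) as current_state by pv_number
| where previous_state=current_state AND previous_time!=current_time
| convert ctime(previous_time) ctime(current_time)
| table pv_number previous_state previous_time current_state current_time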
This is the search with some anonymization.

index=index_1 sourcetype=sourcetype_1 field_1 IN (
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
          | fields field_3
          | mvcombine field_3 delim=" "
          | nomv field_3
          | dedup field_3
          | sort field_3
          | return $field_3 ])
      | fields field_3
      | sort field_3
      | mvcombine field_3 delim=" "
      | nomv field_3 ])

The deepest subsearch returns a list of managers that report to a director, 10 names. The outer subsearch returns a list of users who report to those managers, 1137 names. If I run the search like this, I get output.

index=index_1 sourcetype=sourcetype_1 field_1 IN (1137 entries)

I can't find a reason that the first search returns 'Regex: regular expression is too large', since there is no command that uses regex. I can run each subsearch without any issues, and I can't find anything in the _internal index. Any thoughts on why this is happening, or a better search? TIA, Joe
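One restructuring that may be worth trying, since the outer IN (subsearch) gets expanded into the parent search string: have the outer subsearch return an explicit OR list with format instead of a single space-delimited value inside IN. A rough sketch (field and index names are copied from the anonymized search, and it assumes the field_3 values correspond directly to field_1; subsearch limits in limits.conf may still need raising for ~1137 values):

index=index_1 sourcetype=sourcetype_1
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
          | fields field_3
          | mvcombine field_3 delim=" "
          | nomv field_3
          | dedup field_3
          | return $field_3 ])
      | dedup field_3
      | rename field_3 as field_1
      | fields field_1
      | format ]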
Hello, today I'm asking for your help. I downloaded the Splunk VMware image to try it out and see how it works, but it has not been possible to find in the documentation the
First time ingesting JSON logs, so I need assistance figuring out why my JSON log ingestion is not auto extracting. Environment: SHC, IDX cluster, typical management servers. I first tested a manual upload of a log sample by going to a SH, then Settings -> Add Data -> Upload. When I uploaded a log, the sourcetype _json was automatically selected. In the preview panel everything looked good, so I saved the sourcetype as foo and completed the upload into index=test. Looked at the data and everything was good; the "interesting fields" pane on the left had the auto extractions completed. In ../apps/search/local/props.conf, an entry was created:

[foo]
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE =
INDEXED_EXTRACTIONS = json

I noticed it used INDEXED_EXTRACTIONS, which is not what I wanted (I have never used indexed extractions before), but these are just occasional scan logs of literally a few kilobytes every now and then, so it wasn't a big deal. I copied the above sourcetype stanza to an app in the cluster manager's apps folder (an app where I keep a bunch of random one-off props.conf sourcetype stanzas), then pushed it out to the IDX cluster. Then I created an inputs.conf and server class on the DS to push to the particular forwarder that monitors the folder for the appropriate JSON scan logs. As expected, eventually the scan logs started being indexed and viewable on the search head. Unfortunately, the auto extractions were not being parsed. The interesting fields panel on the left only had the default fields. In the right panel where the logs are, the field names were highlighted in red, which I guess means Splunk recognizes the field names? Either way, the issue is I had no interesting fields. I figured maybe the issue was that on the search heads I had INDEXED_EXTRACTIONS set, and that is probably an indexer setting, so I commented it out and tried using KV_MODE=json in its place, saved the .conf file, and restarted the SH. But the issue remains: no interesting fields. The test upload worked just fine and I had interesting fields in the test index, however when the logs started coming through from the UF, I no longer had interesting fields despite using the same sourcetype. What am I missing? Is there more to ingesting a JSON file than simply using KV_MODE or INDEXED_EXTRACTIONS? But then why does my test upload work?
here is a sample log:   {"createdAt": "2024-09-04T15:23:12-04:00", "description": "bunch of words.", "detectorId": "text/hardcoded-credentials@v1.0", "detectorName": "Hardcoded credentials", "detectorTags": ["secrets", "security", "owasp-top10", "top25-cwes", "cwe-798", "Text"], "generatorId": "something", "id": "LongIDstring", "remediation": {"recommendation": {"text": "a bunch of text.", "url": "a url"}}, "resource": {"id": "oit-aws-codescan"}, "ruleId": "multilanguage-password", "severity": "Critical", "status": "Open", "title": "CWE-xxx - Hardcoded credentials", "type": "Software and Configuration Checks", "updatedAt": "2024-09-18T10:54:02.916000-04:00", "vulnerability": {"filePath": {"codeSnippet": [{"content": " ftp_site = 'something.com'", "number": 139}, {"content": " ftp_base = '/somesite/'", "number": 140}, {"content": " ftp_filename_ext = '.csv'", "number": 111}, {"content": " ", "number": 111}, {"content": " ftp_username = 'anonymous'", "number": 111}, {"content": " ftp_password = 'a****'", "number": 111}, {"content": "", "number": 111}, {"content": " # -- DOWNLOAD DATA -----", "number": 111}, {"content": " # Put all of the data pulls within a try-except case to protect against crashing", "number": 111}, {"content": "", "number": 148}, {"content": " email_alert_sent = False", "number": 111}], "endLine": 111, "name": "somethingsomething.py", "path": "something.py", "startLine": 111}, "id": "LongIDstring", "referenceUrls": [], "relatedVulnerabilities": ["CWE-xxx"]}}   I appreciate any guidance..
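One thing that may explain the difference between the manual upload and the forwarded data: INDEXED_EXTRACTIONS for structured sourcetypes is applied by the universal forwarder itself, so a stanza that only exists on the indexers or search heads has no effect on data arriving from a UF, while the ad-hoc upload is processed entirely on the search head. A sketch of one way to split the settings if search-time extraction is preferred (the stanza name foo comes from the post; it assumes each scan event is a single JSON object on one line, as in the sample above):

# props.conf on the indexers (or heavy forwarder): event breaking only
[foo]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false

# props.conf on the search heads: search-time JSON extraction
[foo]
KV_MODE = json

The alternative would be deploying the original INDEXED_EXTRACTIONS = json stanza to the forwarder itself via the deployment server instead of to the indexer cluster.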
We had a message-parsing issue in an IBM Integration Bus message flow after enabling the exit of the AppD agent. The AppD header singularityheader injected into the message header caused the error "An invalid XML character (Unicode: 0x73) was found in the prolog of the document" when SOAPRequestNode parses the XML message body. 0x73 is the first letter of the header name "singularityheader". So the flow counted the singularityheader header as the beginning of the message content, which makes the content invalid XML. This happened in the message flows that have a SOAP Request node. Has anyone had a similar issue? Any advice will be appreciated.
Hi all, I am trying to install Splunk Security Essentials into a single instance of Splunk from a downloaded file of the app, via the GUI. The documentation does not list any pre-install steps. Any suggestions would be welcome, thanks.

Splunk 9.3.1, Splunk Security Essentials 3.8.0
Error: There was an error processing the upload. Error during app install: failed to extract app from /tmp/tmp6xz06m51 to /opt/splunk/var/run/splunk/bundle_tmp/7364272378fc0528: No such file or directory
I'm trying to create an alert. The alert's query ends with "| stats values(*) as * by actor.displayName | stats count(actor.displayName)". I want to add the clause "| where count > 5" at the end of the query. To verify that the query would work, I changed it to "| where count < 5", but I'm getting no results.
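One possible explanation for the empty result: stats count(actor.displayName) produces a field literally named count(actor.displayName), so a later where count > 5 (or count < 5) never finds a field called count. A sketch with the aggregate given an explicit name (the leading part of the query is whatever the alert already uses):

... | stats values(*) as * by actor.displayName
| stats count(actor.displayName) as count
| where count > 5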
I am trying to run the health check on the DMC. The health check dashboard loads fine from checklist.conf, as per the default and local directories. Our Splunk version is 9.3.0. After clicking the start button it gets stuck at 0%. Can I know what could be causing this issue?
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class... See more...
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class", "tag:rk_job_id", "tag:rk_managed", "tag:rk_object", "tag:rk_restore_source_region", "tag:rk_restore_timestamp", "tag:rk_source_snapshot_native_id", "tag:rk_source_vm_native_id", "tag:rk_source_vm_native_name", "tag:rk_taskchain_id", "tag:rk_user", "tag:rk_version" ]
Hello Splunkers, I started to use the Splunk universal forwarder in my job and I am kinda new to systems. My dashboard works well with the standard ALL option in the multiselect, but when it comes to selecting multiple indexes from the menu I've got a huge problem. My multiselect search is:

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*" |table source |dedup source

but when I pass this token to reports as:

$multi_token$ | eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable") | where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%") | stats count as total_errors by _time, source | timechart span=1h sum(total_errors) as total_errors by source

it creates a search string with only single backslashes instead of double backslashes:

source="C:\a\b\c\d\e\f f\e\g\ı-i-j\porty102" | eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable") | where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%") | stats count as total_errors by _time, source | timechart span=1h sum(total_errors) as total_errors by source

I've tried so many things but couldn't solve it. Important note: in the multiselect dropdown menu, elements are shown with their whole source address, such as C:\a\b\c\d\e\f f\d\e\ı-i-j\porty102, and I couldn't manage to show this differently either. I can't change anything about the Splunk universal forwarder settings or the source address because restrictions are so strict in the company. Regards
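A sketch of one possible workaround, assuming the token can be populated from an escaped copy of the field rather than the raw value (index, sourcetype and path are taken from the post; the replace() expression doubles every backslash):

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*"
| dedup source
| eval source_escaped=replace(source, "\\\\", "\\\\\\\\")
| table source source_escaped

If this is a classic dashboard, the multiselect's "Field For Label" could stay source and "Field For Value" could be source_escaped, so the value substituted into $multi_token$ already carries doubled backslashes when it lands in the report's search string.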
I'm trying to get my custom python generating command to output warnings or alerts below the search bar in the UI. If I raise an exception, it displays there automatically, but like all exceptions it's messy.  I'd like to be able to catch the exception and format it correctly, or better still just write out the warning message for it to be picked up by the GUI. It looks like I should create some custom [messages.conf] stanza and include that Name and formatting in the message. [PYTHONSCRIPT:RESULTS_S_S] message = Expecting %s results, server provided %s The current logging is going to search.log (CHUNKEDEXTPROCESSORVIASTDERRLOGGER) but not reaching info.csv (infoPath). Thanks in advance    
Hello everyone, I have created a query that lists sourcetypes:

index=_audit action=search info=granted source="*metrics.log" group="per_sourcetype_thruput"
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append
    [| metadata index=* type="sourcetypes"
     | eval hasBeenSearched="0"]
| chart sum(kb) by series
| sort - sum(kb)
| search hasBeenSearched="0"
| search NOT [inputlookup sourcetypes_1.csv | fields sourcetype]

I would like to modify this query so that it also lists the ingestion volume of these sourcetypes. Kindly suggest.
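For the volume part, one possibility is a sketch like the one below, which reads per-sourcetype throughput from metrics.log in _internal; the result could then be appended or joined onto the existing output (the GB conversion and output field names are assumptions):

index=_internal source=*metrics.log group=per_sourcetype_thruput
| stats sum(kb) as total_kb by series
| eval total_gb=round(total_kb/1024/1024, 3)
| rename series as sourcetype
| sort - total_kb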
Using Dashboard Studio, I have my data source for one panel, then a chained data source for another panel. The first panel is a bar chart of counts by day, the second is a moving average. I'm trying to overlay the moving average on top of the bar chart. I have done this in Classic using overlays, but in Studio I don't know how to reference the chained data source's results in the first panel. For example, my bar chart visualization code looks like this. In overlay fields I tried to explicitly reference the data source name, but it doesn't seem to work. I know both queries/data sources are working, as my base search works and my chained search works when shown in a separate panel.

{
  "type": "splunk.column",
  "dataSources": {
    "primary": "ds_C2wKdHsA"
  },
  "title": "Per Day Count",
  "options": {
    "y": "> primary | frameBySeriesNames('NULL','_span','_spandays')",
    "legendTruncation": "ellipsisOff",
    "legendDisplay": "off",
    "xAxisTitleVisibility": "hide",
    "xAxisLabelRotation": -45,
    "yAxisTitleVisibility": "hide",
    "overlayFields": "$chaineddatasource_ByDayMA:result.gpsreHaltedJobsMA$",
    "axisY2.enabled": true,
    "dataValuesDisplay": "all"
  },
  "showProgressBar": false,
  "showLastUpdated": false,
  "context": {}
}
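For what it's worth, my understanding is that overlayFields can only name a field that exists in the panel's own primary data source, not a separate chained data source, which would explain why the token reference does nothing. One workaround sketch: point the bar chart at a chained search that produces both series, then reference the moving-average field by name. Assuming the base search already ends with something like a per-day count named daily_count (the field name and 7-day window below are assumptions), the chained data source would only need:

| trendline sma7(daily_count) as seven_day_avg

and the column chart's options would then use "overlayFields": "seven_day_avg" together with "axisY2.enabled": true so the average draws against the second axis.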
Good day, I have done a join on two indexes before to add more information to one event, for example getting the department for a user from network events. But now I want to combine two indexes to give me more data.

Example: index one will display:
host 1 10.0.0.2
host 2 10.0.0.3

And index two will display:
host 3 10.0.0.4
host 1 10.0.0.2

What I want is:
host 1 10.0.0.2
host 2 10.0.0.3
host 3 10.0.0.4

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc

index=* sourcetype=o365:management:activity
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
| where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
| eval object=mvdedup(split(ResourceName," "))
| eval Provider=mvdedup(split(CloudProvider," "))
| eval Type=mvdedup(split(ResourceTypes," "))
| dedup object
| where EventType!="Microsoft.Security/assessments/Delete"
| table object, Provider, Type *
| sort object asc
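A sketch of one way to stack the two result sets instead of joining them: run the second search as an appended subsearch, then dedup on object. Both searches are abbreviated from the ones above, so the remaining renames and filters would need to be carried over; note that append is subject to subsearch limits, so union (or a single search over both indexes) would be an alternative for large result sets.

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| table object change_type resource_group subscription command _time
| append
    [ search index=* sourcetype=o365:management:activity
      | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
      | eval object=mvdedup(split(ResourceName," "))
      | table object ]
| dedup object
| sort object asc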
Hi folks, currently we have 4 physical indexers running on CentOS, but since CentOS is EOL, we plan to migrate the OS from CentOS to Red Hat on the same physical nodes. The cluster master is a VM and is already running on Red Hat, so we will not be touching the CM. What should the approach be here and how should we plan this activity? Any high-level steps would be highly appreciated.
Good day, I am trying to find the latest event for my virtual machines to determine if they are still active or decommissioned. The object is the hostname and the command is where I can see if a device was deleted or just started. I will afterwards add command!="*DELETE".

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| stats max(_time) as time by command object subscription change_type resource_group
| convert ctime(time)
```| dedup object```
| table change_type object resource_group subscription command time
| sort object asc
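One detail that may help: stats max(_time) ... by command object ... keeps the newest event per command per object, so a host that was started and later deleted still shows one row for each command. A sketch that keeps a single most recent row per object instead (field names are taken from the search above):

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| stats latest(_time) as time latest(command) as command latest(change_type) as change_type latest(resource_group) as resource_group latest(subscription) as subscription by object
| where NOT like(upper(command), "%DELETE%")
| convert ctime(time)
| table change_type object resource_group subscription command time
| sort object asc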
Can't a hot bucket just roll directly to a cold bucket? Or is that not possible? Does it have anything to do with the fact that the hot bucket is actively being written to? Can anyone please shed some light on this at a technical level, as I'm not getting the answer I'm looking for from the documentation. Thanks in advance.
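For reference, the relevant knobs live in indexes.conf: hot and warm buckets both sit in homePath (a hot bucket rolls to warm essentially by being closed and renamed in place, since it can no longer be written to), and only warm buckets are later moved to coldPath, typically when maxWarmDBCount or the homePath size limit is reached. A hypothetical stanza showing where those settings sit:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
maxDataSize = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 7776000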
How do I generate reports and run stats on key=value pairs from just the message field, ignoring the rest of the fields?

{"cluster_id":"cluster", "message":"Excel someType=MY_TYPE totalItems=1 errors=ABC, XYZ status=success","source":"some_data"}

I have gone through multiple examples but could not find something concrete that will help me: group by the key someType, compute stats on totalItems, and list the top errors (ABC, XYZ). These don't have to be in the same query; I assume the top-errors grouping would be a separate query.
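A sketch of one way to do this, assuming the events look like the JSON above and that index/sourcetype below are placeholders: isolate the message field, pull the key=value pairs out with rex, then aggregate. (If the JSON fields are already auto-extracted at search time, the spath line is unnecessary.)

index=my_index sourcetype=my_json
| spath output=message path=message
| rex field=message "someType=(?<someType>\S+)"
| rex field=message "totalItems=(?<totalItems>\d+)"
| stats count as events sum(totalItems) as total_items by someType

And the top-errors breakdown as a separate query, splitting the comma-separated list into one value per row:

index=my_index sourcetype=my_json
| spath output=message path=message
| rex field=message "errors=(?<errors>.+?)\s+status="
| eval errors=split(errors, ",")
| mvexpand errors
| eval errors=trim(errors)
| top errors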
Our MySQL server was upgraded from 5.7 to 8.0.37, and the MariaDB plugin no longer supports exporting audit log files. Are there any methods to export audit logs in a Windows environment?