We are having an issue where, in order to see correct JSON syntax highlighting, we have to set "max lines" to "all lines". On a separate post the resolution was to turn off "pretty printing" so that instead of each event taking up multiple lines it only takes up one, which then allows Splunk to show the data with the correct JSON syntax highlighting. How do I turn this off?
Hello, I'm trying to achieve a result set which can be used in an alert later on. Basically, when the search is executed, it should look for a field named "state" and compare its value with the value from two hours ago for the same corresponding record, identified by the field "pv_number". If the value of the field did not change between "now" and "two hours ago", capture it in a table showing the previous state and current state along with the previous time and current time. Any help is greatly appreciated. Thanks much!
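A minimal sketch of one way to approach this, assuming the search runs over roughly the last two hours so that earliest/latest per pv_number approximate "two hours ago" and "now" (the index name and table layout are hypothetical):

    index=your_index earliest=-2h@m latest=now
    | stats earliest(state) as previous_state earliest(_time) as previous_time
            latest(state) as current_state latest(_time) as current_time by pv_number
    | where previous_state == current_state
    | convert ctime(previous_time) ctime(current_time)
    | table pv_number previous_state previous_time current_state current_time

The where clause keeps only the records whose state did not change between the oldest and newest event in the window, which is the condition the alert would fire on.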
This is the search with some anonymization.

index=index_1 sourcetype=sourcetype_1 field_1 IN (
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
          | fields field_3
          | mvcombine field_3 delim=" "
          | nomv field_3
          | dedup field_3
          | sort field_3
          | return $field_3 ])
      | fields field_3
      | sort field_3
      | mvcombine field_3 delim=" "
      | nomv field_3 ])

The deepest subsearch returns a list of managers that report to a director, 10 names. The outer subsearch returns a list of users who report to those managers, 1137 names. If I run the search like this, I get output.

index=index_1 sourcetype=sourcetype_1 field_1 IN (1137 entries)

I can't find a reason why the first search returns 'Regex: regular expression is too large', since there is no command that uses regex. I can run each subsearch without any issues. I can't find anything in the _internal index. Any thoughts on why this is happening, or a better search? TIA, Joe
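A minimal sketch of an alternative that avoids the large IN(...) expansion, assuming field_3 in index_2 corresponds to field_2 when matching managers and to field_1 when matching users (the rename targets are assumptions inferred from the original search). Letting each subsearch emit a single renamed field lets Splunk build a plain OR list instead of the mvcombine/IN construction:

    index=index_1 sourcetype=sourcetype_1
        [ search index=index_2
            [ search index=index_2 field_2=abcdefg
              | dedup field_3
              | rename field_3 as field_2
              | fields field_2 ]
          | dedup field_3
          | rename field_3 as field_1
          | fields field_1 ]

Subsearches still have result-count and runtime limits, so if the user list grows much beyond a few thousand entries, a lookup-based approach may be more robust.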
Hello, today I am asking for your help. I downloaded the Splunk VMware image to try it out and see how it works, but I have not been able to find in the documentation the
First time ingesting JSON logs, so I need assistance figuring out why my JSON log ingestion is not auto-extracting. Environment: SHC, IDX cluster, typical management servers.

I first tested a manual upload of a log sample by going to a SH, then Settings -> Add Data -> Upload. When I uploaded a log, the sourcetype _json was automatically selected. In the preview panel everything looked good, so I saved the sourcetype as foo and completed the upload into index=test. Looked at the data, everything was good; the "interesting fields" pane on the left had the auto extractions completed. In ../apps/search/local/props.conf, an entry was created:

[foo]
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE =
INDEXED_EXTRACTIONS = json

I noticed it used INDEXED_EXTRACTIONS, which is not what I wanted (I have never used indexed extractions before), but figured these are just occasional scan logs that are literally just a few kilobytes every now and then, so it wasn't a big deal. I copied the above sourcetype stanza to an app in the cluster manager's apps folder (an app where I keep a bunch of random one-off props.conf sourcetype stanzas), then pushed it out to the IDX cluster. Then I created an inputs.conf and server class on the DS to push to the particular forwarder that monitors the folder for the appropriate JSON scan logs.

As expected, eventually the scan logs started being indexed and viewable on the search head. Unfortunately, the auto extractions were not being parsed. The interesting fields panel on the left only had the default fields. In the right panel where the logs are shown, the field names were highlighted in red, which I guess means Splunk recognizes the field names? But either way, the issue is I had no interesting fields.

I figured maybe the issue was that on the search heads I had INDEXED_EXTRACTIONS set, and that's probably an indexer setting, so I commented it out and tried using KV_MODE = json in its place, saved the .conf file and restarted the SH. But the issue remains: no interesting fields.

The test upload worked just fine and I had interesting fields in the test index; however, when the logs started coming through from the UF, I no longer had interesting fields despite using the same sourcetype. What am I missing? Is there more to ingesting a JSON file than simply using KV_MODE or INDEXED_EXTRACTIONS? But then why does my test upload work?
Here is a sample log:

{"createdAt": "2024-09-04T15:23:12-04:00", "description": "bunch of words.", "detectorId": "text/hardcoded-credentials@v1.0", "detectorName": "Hardcoded credentials", "detectorTags": ["secrets", "security", "owasp-top10", "top25-cwes", "cwe-798", "Text"], "generatorId": "something", "id": "LongIDstring", "remediation": {"recommendation": {"text": "a bunch of text.", "url": "a url"}}, "resource": {"id": "oit-aws-codescan"}, "ruleId": "multilanguage-password", "severity": "Critical", "status": "Open", "title": "CWE-xxx - Hardcoded credentials", "type": "Software and Configuration Checks", "updatedAt": "2024-09-18T10:54:02.916000-04:00", "vulnerability": {"filePath": {"codeSnippet": [{"content": " ftp_site = 'something.com'", "number": 139}, {"content": " ftp_base = '/somesite/'", "number": 140}, {"content": " ftp_filename_ext = '.csv'", "number": 111}, {"content": " ", "number": 111}, {"content": " ftp_username = 'anonymous'", "number": 111}, {"content": " ftp_password = 'a****'", "number": 111}, {"content": "", "number": 111}, {"content": " # -- DOWNLOAD DATA -----", "number": 111}, {"content": " # Put all of the data pulls within a try-except case to protect against crashing", "number": 111}, {"content": "", "number": 148}, {"content": " email_alert_sent = False", "number": 111}], "endLine": 111, "name": "somethingsomething.py", "path": "something.py", "startLine": 111}, "id": "LongIDstring", "referenceUrls": [], "relatedVulnerabilities": ["CWE-xxx"]}

I appreciate any guidance.
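One detail worth noting: INDEXED_EXTRACTIONS for monitored files is applied where the file is read, i.e. on the universal forwarder, so a stanza deployed only to the indexers or search heads would not produce index-time fields for UF-forwarded data, while the one-shot upload parses on the instance doing the upload. A minimal sketch of a purely search-time alternative, assuming the events are single-line JSON like the sample above and keeping the sourcetype name foo from the post:

    # props.conf on the indexers (or any heavy forwarder in the path) - parse-time settings
    [foo]
    LINE_BREAKER = ([\r\n]+)
    SHOULD_LINEMERGE = false

    # props.conf on the search heads - search-time JSON field extraction
    [foo]
    KV_MODE = json

If you prefer to keep INDEXED_EXTRACTIONS = json instead, the stanza would need to be deployed to the universal forwarder itself, and KV_MODE should stay none on the search heads to avoid duplicated field values.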
We had a message-parsing issue in an IBM Integration Bus message flow after enabling the AppD agent exit. The AppD header singularityheader injected into the message header caused the error "An invalid XML character (Unicode: 0x73) was found in the prolog of the document" when SOAPRequestNode parses the XML message body. 0x73 is the first letter of the header name "singularityheader". So the flow counted the singularityheader header as the beginning of the message content, which makes the content invalid XML. This happened in the message flows that have a SOAP Request node. Has anyone had a similar issue? Any advice would be appreciated.
Hi all, I am trying to install Splunk Security Essentials into a single instance of Splunk with a downloaded file of the app, via the GUI. The documentation does not have any pre-install steps. Any suggestions would be welcome, thanks.

Splunk 9.3.1
Splunk Security Essentials 3.8.0

Error: There was an error processing the upload. Error during app install: failed to extract app from /tmp/tmp6xz06m51 to /opt/splunk/var/run/splunk/bundle_tmp/7364272378fc0528: No such file or directory
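As a quick cross-check, under the assumption that the failure is specific to the web upload path (for example a full or non-writable /tmp or var/run/splunk directory), installing the same downloaded package from the CLI is a reasonable sketch to try; the package file name below is hypothetical:

    # run as the user that owns the Splunk installation
    /opt/splunk/bin/splunk install app /path/to/splunk-security-essentials_380.tgz
    # restart so the app is fully loaded
    /opt/splunk/bin/splunk restart

If the CLI install succeeds, that points at the temporary extraction directories used by the GUI upload rather than at the app package itself.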
I'm trying to create an alert. The alert's query ends with "| stats values(*) as * by actor.displayName | stats count(actor.displayName)". I want to add the clause "| where count > 5" at the end of the query. To verify that the query would work, I changed it to "| where count < 5", but I'm getting no results.
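One possible explanation worth checking, sketched under the assumption that the field name is the issue: stats count(actor.displayName) produces a field literally named count(actor.displayName), not count, so where count > 5 (or < 5) compares against a field that does not exist and returns nothing. Renaming the aggregate makes the where clause resolvable:

    ... | stats values(*) as * by actor.displayName
    | stats count(actor.displayName) as count
    | where count > 5

Since each row after the first stats is already one actor, a plain "| stats count" would give the same number of distinct actors.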
This Tech Talk will explore the pipeline management offerings Edge Processor and Ingest Processor and provide guidance on when to use which through the application of two key use cases in Security and Observability.

Key Takeaways
Learn how to use Edge Processor to optimize for SOC2 compliance and to reduce egress costs when coupled with Federated Search
Learn how to use Ingest Processor to enrich observability data in service contexts where you’ve not implemented telemetry.

Watch Metrics Demo:
Watch Full Data Demo:
I am trying to run the health check on the DMC. The health check dashboard loads fine from checklist.conf, as per the default and local directories. Our Splunk version is 9.3.0. After clicking the Start button it gets stuck at 0%. Any idea what could be causing this issue?
3-2-1 Go! How Fast Can You Debug Microservices with Observability Cloud? Learn how unique features like Service Centric Views, Tag Spotlight, and Trace Analyzer together with your existing Splunk deployment can help you get to the leaderboard! Ask questions of product experts, watch demos and learn more about the product!

Key Takeaways
How Splunk can help you debug problems in microservices faster
Splunk tools that can help you accelerate troubleshooting
How to use Splunk Cloud/Enterprise for additional use cases

Watch the full Tech Talk here or view individual demos from the live event below
Yesterday the entire team at Splunk + Cisco joined the global celebration of CX Day - celebrating our customers, communities, and our employees. As always, we were delighted to recognize the important role that customer experience plays not only in our products but also in the experiences that we enable for our customers and partners around the world. Yesterday, we hosted a LinkedIn Live Discussion moderated by Splunk’s Chief Customer Officer and SVP of Customer and Partner Success at Cisco, Toni Pavlovich, with Splunk customers Dr. Bonnie An Henderson, President and CEO of HelpMeSee, and Leonard Wall, Deputy CISO from Clayton Homes. During this discussion they shared their mission of creating a brighter future for all and their innovative strategies for elevating customer and employee experiences, delved into key challenges and best practices, and explored the transformative impact of data on outcomes. Additionally, we wanted to give our community an opportunity to discover your customer experience emotional quotient - and why this can be important to you as an employee or in how you serve your customers in this digital marketplace. Discover your CX EQ > Finally, we had the distinct pleasure of featuring our customers’ voices in a series of videos. Our customers, including some community members, described Splunk in one word, told us about their favorite ‘aha’ moment, discussed their business outcomes, and provided a look at how our CX focus impacted their organization. Thank you for being a part of our Splunk Community; we celebrate you, our customers, every day!
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class... See more...
"c7n:MatchedFilters": [ "tag:ApplicationFailoverGroup", "tag:AppTier", "tag:Attributes", "tag:DBNodes", "tag:rk_aws_native_account_id", "tag:rk_cluster_id", "tag:rk_component", "tag:rk_instance_class", "tag:rk_job_id", "tag:rk_managed", "tag:rk_object", "tag:rk_restore_source_region", "tag:rk_restore_timestamp", "tag:rk_source_snapshot_native_id", "tag:rk_source_vm_native_id", "tag:rk_source_vm_native_name", "tag:rk_taskchain_id", "tag:rk_user", "tag:rk_version" ]
Hello Splunkers, I started using the Splunk universal forwarder in my job and I am kind of new to systems. My dashboard works fine with the standard ALL option in the multiselect, but when it comes to selecting multiple sources from the menu I've got a huge problem. My multiselect search is:

index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*"
| table source
| dedup source

but when I pass this token to reports as:

$multi_token$
| eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable")
| where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%")
| stats count as total_errors by _time, source
| timechart span=1h sum(total_errors) as total_errors by source

it creates a search string with only single backslashes instead of double backslashes:

source="C:\a\b\c\d\e\f f\e\g\ı-i-j\porty102"
| eval ping_error=case( like(_raw, "%Request Timeout%"), "Request_Timeout", like(_raw, "%Destination Host Unreachable%"), "Destination_Host_Unreachable")
| where isnotnull(ping_error) AND NOT like(_raw, "%x.y.z.net%")
| stats count as total_errors by _time, source
| timechart span=1h sum(total_errors) as total_errors by source

I've tried so many things but couldn't solve it. Important note: in the multiselect dropdown menu, elements are shown with their whole source address, such as C:\a\b\c\d\e\f f\d\e\ı-i-j\porty102. I couldn't manage to show this properly either. I can't change anything about the Splunk universal forwarder settings or the source address because restrictions are so strict in the company. Regards
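A minimal sketch of one common workaround, assuming the problem is that the raw source value (single backslashes) is substituted literally into the report search, where the backslashes then need escaping: double the backslashes in the search that populates the multiselect and use that escaped field as the option value (the field name source_escaped is hypothetical, and the exact number of backslashes in replace() can need tuning, since both eval string parsing and the regex engine consume escapes):

    index="myindex" sourcetype="pinginfo" source="C:\\a\\b\\c\\d\\e\\f f\\g\\h\\ı-i-j\\porty*"
    | dedup source
    | eval source_escaped=replace(source, "\\\\", "\\\\\\\\")
    | table source source_escaped

If this is a Classic (Simple XML) dashboard, the multiselect can then use source as the label field and source_escaped as the value field, so the token already carries doubled backslashes when the report search is built.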
I'm trying to get my custom Python generating command to output warnings or alerts below the search bar in the UI. If I raise an exception, it displays there automatically, but like all exceptions it's messy. I'd like to be able to catch the exception and format it correctly, or better still just write out the warning message for it to be picked up by the GUI. It looks like I should create some custom messages.conf stanza and include that name and formatting in the message:

[PYTHONSCRIPT:RESULTS_S_S]
message = Expecting %s results, server provided %s

The current logging is going to search.log (CHUNKEDEXTPROCESSORVIASTDERRLOGGER) but not reaching info.csv (infoPath). Thanks in advance
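A minimal sketch of one way to surface a formatted message, assuming the command is built on the Splunk Python SDK (splunklib.searchcommands), whose SearchCommand base class exposes write_warning/write_error helpers that end up in the messages area under the search bar; the helper name fetch_results and the message text are hypothetical:

    # my_gen_command.py - hedged sketch, not the only way to do this
    import sys
    from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

    @Configuration()
    class MyGenCommand(GeneratingCommand):
        def generate(self):
            try:
                results = self.fetch_results()  # hypothetical helper that may raise
            except Exception:
                # surfaces a formatted warning instead of a raw traceback
                self.write_warning("Expecting {} results, server provided {}".format(5, 0))
                return
            for row in results:
                yield row

        def fetch_results(self):
            # placeholder for whatever the real command does
            return [{"_raw": "example event"}]

    if __name__ == "__main__":
        dispatch(MyGenCommand, sys.argv, sys.stdin, sys.stdout, __name__)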
Hello everyone, I have created a query that lists sourcetypes:

index=_audit action=search info=granted source="*metrics.log" group="per_sourcetype_thruput"
| eval _raw=search
| eval _raw=mvindex(split(_raw,"|"),0)
| table _raw
| extract
| stats count by sourcetype
| eval hasBeenSearched=1
| append [| metadata index=* type="sourcetypes" | eval hasBeenSearched="0"]
| chart sum(kb) by series
| sort - sum(kb)
| search hasBeenSearched="0"
| search NOT [inputlookup sourcetypes_1.csv | fields sourcetype]

I would like to modify this query so that it also lists the ingestion volume of these sourcetypes. Kindly suggest.
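A minimal sketch of one way to pull per-sourcetype ingestion volume, assuming the internal metrics.log data is retained for the time range of interest (the series field there carries the sourcetype name); this result could then be joined or appended to the query above on the sourcetype name:

    index=_internal source=*metrics.log group=per_sourcetype_thruput
    | stats sum(kb) as total_kb by series
    | eval total_gb=round(total_kb/1024/1024, 3)
    | rename series as sourcetype
    | sort - total_kb

If licensed volume rather than throughput is what you need, index=_internal source=*license_usage.log type=Usage with its st (sourcetype) and b (bytes) fields is an alternative source.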
Using Dashboard Studio, I have my data source for one panel and a chained data source for another panel. The first panel is a bar chart of counts by day; the second is a moving average. I'm trying to overlay the moving average on top of the bar chart. I have done this in Classic using overlays, but in Studio I don't know how to reference the chained data source results in the first panel. For example, my bar chart visualization code looks like this. In overlay fields I tried to explicitly reference the data source name, but it doesn't seem to work. I know both queries/data sources are working, as my base search works and my chained search works when shown in a separate panel.

{
    "type": "splunk.column",
    "dataSources": {
        "primary": "ds_C2wKdHsA"
    },
    "title": "Per Day Count",
    "options": {
        "y": "> primary | frameBySeriesNames('NULL','_span','_spandays')",
        "legendTruncation": "ellipsisOff",
        "legendDisplay": "off",
        "xAxisTitleVisibility": "hide",
        "xAxisLabelRotation": -45,
        "yAxisTitleVisibility": "hide",
        "overlayFields": "$chaineddatasource_ByDayMA:result.gpsreHaltedJobsMA$",
        "axisY2.enabled": true,
        "dataValuesDisplay": "all"
    },
    "showProgressBar": false,
    "showLastUpdated": false,
    "context": {}
}
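A minimal sketch of one workaround, under the assumption that overlayFields only accepts field names present in the panel's primary data source rather than a token pointing at a different data source: compute the moving average inside the same search (base or chained) so it arrives as an extra column, then name that column in overlayFields. The field name count_ma7 and the 7-point window below are hypothetical:

    ... existing counts-by-day search ...
    | trendline sma7(count) as count_ma7

and in the column chart options of that panel:

    "overlayFields": "count_ma7",
    "axisY2.enabled": true

This keeps a single data source for the panel, so the bars and the overlay line always share the same time axis.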
Good day. I have done a join on two indexes before to add more information to one event, for example getting the department for a user from network events. But now I want to add two indexes together to give me more data.

Example: index one will display:
host 1 10.0.0.2
host 2 10.0.0.3

And index two will display:
host 3 10.0.0.4
host 1 10.0.0.2

What I want is:
host 1 10.0.0.2
host 2 10.0.0.3
host 3 10.0.0.4

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| dedup object
| where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
| table change_type object resource_group subscription command _time
| sort object asc

index=* sourcetype=o365:management:activity
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as CloudProvider
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as ResourceTypes
| rename "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
| where ResourceTypes="Microsoft.Compute/virtualMachines" OR ResourceTypes="microsoft.compute/virtualmachines"
| eval object=mvdedup(split(ResourceName," "))
| eval Provider=mvdedup(split(CloudProvider," "))
| eval Type=mvdedup(split(ResourceTypes," "))
| dedup object
| where EventType!="Microsoft.Security/assessments/Delete"
| table object, Provider, Type *
| sort object asc
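Since the goal is a union of rows from both searches keyed on object rather than enrichment of one by the other, a minimal sketch using append and a final dedup may be closer than join; the two searches are taken from the post and only the combining logic is an assumption:

    index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
    | rename "identity.authorization.evidence.roleAssignmentScope" as subscription
    | where command!="MICROSOFT.COMPUTE/VIRTUALMACHINES/DELETE"
    | fields change_type object resource_group subscription command _time
    | append
        [ search index=* sourcetype=o365:management:activity
          | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceName" as ResourceName
          | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.CloudProvider" as Provider
          | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.ResourceType" as Type
          | rename "PropertyBag{}.AssessmentStatusPerInitiative{}.EventType" as EventType
          | where Type="Microsoft.Compute/virtualMachines" OR Type="microsoft.compute/virtualmachines"
          | where EventType!="Microsoft.Security/assessments/Delete"
          | eval object=mvdedup(split(ResourceName," "))
          | fields object Provider Type ]
    | dedup object
    | sort object asc
    | table object change_type resource_group subscription command Provider Type _time

The dedup at the end keeps one row per object when a host appears in both indexes, which matches the desired three-row result in the example.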
Hi folks, currently we have 4 physical indexers running on CentOS, but since CentOS is EOL we plan to migrate the OS from CentOS to Red Hat on the same physical nodes. The cluster master is a VM and is already running on Red Hat, so we will not be touching the CM. What should the approach be here, and how should we plan this activity? Any high-level steps would be highly appreciated.
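A minimal sketch of the per-indexer sequence for a rolling, one-node-at-a-time rebuild, assuming replication and search factors are currently met and the bucket data lives on storage that survives (or is restored after) the OS reinstall; paths and ordering are assumptions to validate against the official cluster documentation:

    # on the cluster manager: pause bucket fix-ups while a peer is down
    /opt/splunk/bin/splunk enable maintenance-mode

    # on the indexer being migrated: take it offline gracefully
    /opt/splunk/bin/splunk offline

    # reinstall RHEL on the node, restore/remount $SPLUNK_HOME and the index volumes,
    # reinstall the same Splunk version, then start the peer so it rejoins the cluster
    /opt/splunk/bin/splunk start

    # on the cluster manager: resume fix-ups once the peer reports Up
    /opt/splunk/bin/splunk disable maintenance-mode

Repeat one indexer at a time, confirming the cluster is searchable and both factors are met before moving to the next node.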
I saw the news about BOTS v9 in Celebrating 2024 Worldwide BOTS Day | Splunk here! I want to participate in v9 as a Splunk lover, so I registered on the portal. As written in the article, there are two sessions on the portal, and I can join both...? Is that okay? I can't find any notice saying I can participate in only one session. Common sense says... yes, I should pick one. However, I have already enrolled in both. (If I should do only one, I want to cancel one.) Can anyone help?