All Topics



index=* "23.216.147.64"   Above is my filter, I'm trying to get all the records of that IP address; is this filter correct? please help    thanks tony
I'm trying to configure an automatic lookup that matches a multivalue field of IP addresses (in the lookup) against an IP field (in the SPL results). The lookup is a KV Store, and the definition targets that collection. When I run the lookup definition manually, it works fine:
```
index=my_index eventtype=my_event_type | lookup lookup_definition ip_mv AS ip OUTPUTNEW dns
```
However, when I create an automatic lookup using the same information, it doesn't work. Any ideas?
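For comparison, a minimal props.conf sketch of the automatic version, assuming the events carry a sourcetype of my_sourcetype (the stanza name is an assumption; the lookup and field names are taken from the manual search):
```
[my_sourcetype]
LOOKUP-dns = lookup_definition ip_mv AS ip OUTPUTNEW dns
```
One thing to check: automatic lookups can only be scoped to a host, source, or sourcetype stanza, and they run before eventtypes are evaluated, so an automatic lookup cannot be restricted to eventtype=my_event_type.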
How do I escape a single quote within a dbxquery SQL LIKE clause? For example:
```
content = '. . . . . .  src_port': 20, 'dst_port': 21     .....   '
```
(there is a space after the colon)
```
| dbxquery connection=visibility query="select  content from DB where content like '%port\'\:\s20\,' "
```
This query gave me an error. I already tried escaping the single quote with another single quote, but that gave me 0 results. Thanks
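In standard SQL the escape for a single quote inside a string literal is a doubled single quote, and LIKE only understands % and _ as wildcards, not regex tokens like \s or \,. A hedged sketch using the names from the question:
```
| dbxquery connection=visibility query="select content from DB where content like '%port'': 20,%'"
```
If dbxquery's own parser still rejects the doubled quote, the fallback depends on the backing database (for example, building the quote character with CHR(39) on Oracle, or using an ESCAPE clause).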
We are attempting to include links to dashboards and external sites within the "Notable Fields" section. We have tried using <a> and <link> and various configurations of tags, and have not been able to figure out how to get anything to display as a clickable link. Are clickable links an option with this app? Thanks
Hello, has anyone been able to ingest Certificate Transparency logs (via the Certificate Transparency Log add-on for Splunk or any other app) on Splunk Enterprise version 9?
Hello Splunkers, I have an event like this:
```
blocked,Adware,ABCD,test.exe,\\program_files\c\Drivers\,,,Generic PUA JB,,Endpoint Protection
```
I am extracting fields using a comma delimiter, so my props.conf and transforms.conf are:
```
# transforms.conf
[cs_srctype]
CLEAN_KEYS = 0
DELIMS = ,
FIELDS = action,category,dest,file_name,file_path,severity,severity_id,signature,signature_id,vendor_product

# props.conf
[cs_srctype]
KV_MODE = none
REPORT-cs_srctype = cs_srctype
```
The output that I am getting is:
```
file_path = \\program_files\c\Drivers\,
severity =
severity_id = Generic PUA GB
signature =
signature_id = Endpoint Protection
vendor_product =
```
All the fields before file_path are extracted properly; the ones after file_path are incorrect because the comma after the trailing backslash is being kept inside file_path instead of treated as a separator, which shifts every following field. How do I ignore the \, and extract the fields properly? Thank you in advance
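A hedged workaround, assuming the shifted output happens because the extraction treats the backslash before a comma as an escape: replace DELIMS/FIELDS with an explicit REGEX that splits on every comma, using named capture groups so no FORMAT line is needed (validate the regex against your real events):
```
# transforms.conf (sketch)
[cs_srctype]
CLEAN_KEYS = 0
REGEX = ^(?<action>[^,]*),(?<category>[^,]*),(?<dest>[^,]*),(?<file_name>[^,]*),(?<file_path>[^,]*),(?<severity>[^,]*),(?<severity_id>[^,]*),(?<signature>[^,]*),(?<signature_id>[^,]*),(?<vendor_product>.*)$
```
The props.conf stanza can stay as it is, since REPORT-cs_srctype already points at this transform.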
I am relatively new to Splunk search and I am trying to build a table from my search results. Can someone please help me build a table using the following JSON? My search results are as follows:
```
{ [-]
  docker: { [+] }
  kubernetes: { [+] }
  log: LOGGER {"name":"some text here","pathname":"/some/path","timestamp":"2023-05-03T20:35:06Z","action":"pageview","payload":{"category":"cloths","country":"US","appEnv":"production"},"uID":"0023493543"}
  stream: stdout
}
```
Raw text:
```
{
  "stream": "stdout",
  "log": "LOGGER {\"name\":\"Some text here\",\"pathname\":\"/some/path\",\"timestamp\":\"2023-05-04T10:44:05Z\",\"action\":\"pageview\",\"payload\":{\"category\":\"cloths\",\"country\":\"US\",\"appEnv\":\"production\"},\"uID\":\"0023493543\"}",
  "docker": { "container_id": "xxxxxxxxxxxx" },
  "kubernetes": { "container_name": "xxxxxx", ..... },
  "labels": { ..... },
  "namespace_id": "xxxx-xxx-xxx-xxx",
  "namespace_labels": {
    "application-id": "48928423",
    "namespace": "849328932-243232xxxx",
    ........
  }
}
```
From this I would like to draw the table as:

| uID | pathname | category | eventName | country | timestamp |
| ---- | ---- | ---- | ---- | ---- | ---- |
| 0023493543 | /some/path | cloths | some text here | US | |

I have tried building the table using the spath, eval and extract commands, but none of my attempts gives the desired result. If it were a plain JSON object in the log field, I could manage a query for a few selected fields, but since it's a text string with JSON inside it, I am not sure how to extract my fields. I am expecting a table as shown above; later I can modify the query for my more complex result. I have tried the following query:
```
BASE SEARCH
| spath path=log
| rex field=log max_match=0 "name\W+(?<name>[^\"]+)"
| rex field=log max_match=0 "pathname\W+(?<pathname>[^\"]+)"
| rex field=log max_match=0 "timestamp\W+(?<timestamp>[^\"]+)"
| rex field=log max_match=0 "category\W+(?<category>[^\"]+)"
| rex field=log max_match=0 "country\W+(?<country>[^\"]+)"
| rex field=log max_match=0 "uID\W+(?<uID>\w+)"
| table uID, pathname, category, name, country, timestamp
```
which gives me the desired result except for the name field, which returns additional text:
```
some text here
some/path
```
but I need only `some text here`.
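One hedged alternative: instead of one rex per field, strip the LOGGER prefix once and hand the remaining string to spath, which parses it as real JSON (the payload_json field name is an assumption):
```
BASE SEARCH
| rex field=log "^LOGGER\s+(?<payload_json>\{.+\})$"
| spath input=payload_json
| rename payload.category AS category, payload.country AS country, name AS eventName
| table uID, pathname, category, eventName, country, timestamp
```
Because spath parses the JSON properly, eventName comes back as the complete quoted value ("some text here") rather than the over-greedy rex match.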
I'm trying to set static colors for different bars in a bar chart and sections of a pie chart on a dashboard, but nothing I've done seems to work. The bar and pie charts show similar data with the same split-by field (Domain), so I want the representation of each split-by value to match by color at all times.

Pie:
```
| my search... | top Domain
```
Bar:
```
| my search... | chart eval(round(avg(DHM),1)) as MTTR by Domain
```
```
<option name="charting.fieldColors">{"DomainA": 0x4169E1, "DomainB": 0xA9A9A9, "DomainC": 0xFF0000, "DomainA, DomainB": 0x008000, "DomainA, DomainC": 0x9400D3, "DomainC, DomainB": 0xFF00FF}</option>
```
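charting.fieldColors keys are matched against series names, and the bar chart as written has a single series called MTTR, so the Domain colors never get a chance to apply there. A hedged sketch of one workaround: duplicate the split-by field so each Domain value becomes its own series (the helper field d is an assumption):
```
| my search...
| eval d=Domain
| chart eval(round(avg(DHM),1)) as MTTR over d by Domain
```
The pie chart's slices are named after the Domain values, so the existing fieldColors option should apply there as long as each key matches the slice name exactly, including the combined "DomainA, DomainB" style values.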
I have an index populated with data from a Log4Net trace log. Each Splunk event in the index is a block of XML with an XML namespace:
```
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
  <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
    <EventID>0</EventID>
    <Type>3</Type>
    <SubType Name="Information">0</SubType>
    <Level>8</Level>
    <TimeCreated SystemTime="04/22/2023 12:30:45.0456293Z"/>
    <Source Name="Bar"/>
    <Correlation ActivityID="{459d276d-8255-47be-be1d-9acd903fd3f0}"/>
    <Execution ProcessName="NA" ProcessID="1124" ThreadID="9"/>
    <Channel/>
    <Computer>NA</Computer>
  </System>
  <ApplicationData>
    <TraceData>
      <DataItem>
        <TraceRecord Severity="Information">
          <TraceIdentifier/>
          <Description><![CDATA[Start Operation: foo]]></Description>
          <Activity><![CDATA[Start Operation: foo]]></Activity>
          <Duration>0</Duration>
        </TraceRecord>
      </DataItem>
    </TraceData>
  </ApplicationData>
</E2ETraceEvent>
```
I am trying to search these events, and I want the contents of the "Description" XML element. Does Splunk's implementation of XPath support namespaces? This search is returning no records:
```
index="lmstracelogs" xpath "//E2ETraceEvent/ApplicationData/TraceData/DataItem/TraceRecord/Description"
```
Almost every example I've found for working with XML suggests regular expressions, which seems inelegant.
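Splunk's xpath command is backed by a standard Python XML library, and a default namespace on the document means unqualified element names in the path won't match. A commonly suggested, hedged workaround is to match on local-name() so the namespace is ignored:
```
index="lmstracelogs"
| xpath outfield=Description "//*[local-name()='Description']"
```
Alternatively, spath also understands XML and may be simpler here, e.g. `| spath output=Description path=E2ETraceEvent.ApplicationData.TraceData.DataItem.TraceRecord.Description`.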
Hello everyone, I recently migrated from old hardware to newer hardware and started to index the same data that I was indexing before (njmon, the JSON version of nmon). In the old infra, with the same Splunk version running on both environments, there were no big issues, even though it has lower capacity than the new one. The newer one also uses faster disks (NVMe). After 1-3 days of ingesting njmon data, the indexer crashes, and during this time I can see a lot of splunkd recover-metadata processes and also splunkd fsck --log-to--splunkd-log repair processes:
```
[root@splunk]# ps -ef | grep splunkd
splunk 21828 16396 99 12:00 ? 02:59:44 splunkd fsck --log-to--splunkd-log repair --try-warm-then-cold --one-bucket --index-name=njmon --bucket-name=db_1683179867_1683170671_48 --bloomfilter-only
splunk 21829 21828 0 12:00 ? 00:00:00 splunkd fsck --log-to--splunkd-log repair --try-warm-then-cold --one-bucket --index-name=njmon --bucket-name=db_1683179867_1683170671_48 --bloomfilter-only
splunk 41284 16396 99 12:20 ? 02:40:30 splunkd recover-metadata /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683195586_1683179630_51 --handle-roll njmon /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683195586_1683179630_51 --write-level 4 --tsidx-target-size 1572864000 --msidx-comp-block-size 1024
splunk 80067 16396 99 12:59 ? 02:00:53 splunkd recover-metadata /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683197989_1683180366_54 --handle-roll njmon /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683197989_1683180366_54 --write-level 4 --tsidx-target-size 1572864000 --msidx-comp-block-size 1024
splunk 136806 16396 99 13:44 ? 01:16:45 splunkd recover-metadata /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683200654_1683180434_53 --handle-roll njmon /net/splunk/fs0/splunk-hotwarm/njmon/db/db_1683200654_1683180434_53 --write-level 4 --tsidx-target-size 1572864000 --msidx-comp-block-size 1024
```
The server is running RHEL 8, 128 GB RAM, 48 physical processors, 96 logical. Splunk version: Splunk 8.2.10 (build 417e74d5c950).

The difference between the old infra and the new one is the tsidx write level. In the old infra we're using 2; in the newer one we're using 4, but all the environments using the data are on version 8.2 or greater.

Any hints from the community?
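Given that the tsidx write level is the one known delta, a hedged first experiment is to pin that index back to the old level on the new hardware and see whether the fsck/recover-metadata storms stop. In indexes.conf (sketch; the setting only affects newly created buckets, so existing level-4 buckets stay as they are):
```
[njmon]
tsidxWriteLevel = 2
```
If the crashes continue even at level 2, that would point away from the write level and toward something else in the new environment (e.g. the NVMe filesystem or ulimits).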
When I try to open ES Incident Review, I am getting an error saying "KV Store is initializing. Please try again later." Why am I getting this, and how do I resolve the issue?
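A couple of hedged first checks while the message is showing, both using standard Splunk tooling:
```
$ splunk show kvstore-status
```
or, from the search bar:
```
| rest /services/server/introspection/kvstore/serverstatus
```
If the status never leaves the starting phase, $SPLUNK_HOME/var/log/splunk/mongod.log usually records why (expired certificates and disk/permission problems are common causes).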
Hello, I have installed Splunk Enterprise Server 9.0.4, which I'm using for HTTP Event Collector. Now I have configured an Envoy proxy to send its access logs to the same Splunk server (via fluentd). Any idea/help with how I can see them on the Splunk server? Which data input should be used, and what indexing?
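A hedged sketch of the fluentd side, assuming the fluent-plugin-splunk-hec output plugin and a HEC token created under Settings > Data inputs > HTTP Event Collector (the match tag, host, token, index and sourcetype values are placeholders):
```
<match envoy.access>
  @type splunk_hec
  hec_host splunk.example.com
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000
  index envoy
  sourcetype envoy:access
</match>
```
No extra data input is needed on the Splunk side beyond the existing HEC input; just make sure the target index exists, the token is allowed to write to it, and then search `index=envoy`.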
Hi Team, please suggest how to ingest the January data into Splunk. The files are CSV files, 18 GB in total, and four days of data have to be sent to a Splunk index. Please let us know the possibilities for ingesting old data. Thanks in advance!!
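For a finite backfill like this, one hedged option is the CLI oneshot input, which reads each file once and honors the timestamps inside it (the path, index and sourcetype names are placeholders):
```
$ splunk add oneshot /data/jan/day1.csv -index myindex -sourcetype my_csv
```
For CSV files, a sourcetype with INDEXED_EXTRACTIONS = csv and TIMESTAMP_FIELDS pointed at the date column keeps _time aligned with January rather than with the ingest time.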
Hi, there seems to be an error in Splunk Cloud; can anyone reproduce? Make a search that returns some data (in JSON), e.g.:
```
index="dev" source="TestService"
```
On the result, click one field in the JSON, which should bring up a box. Then, next to "Add to search", click on the arrow. The expected result is that a new tab opens in your browser with the new command added to the existing query (example search string below). This is what happens on on-prem Splunk instances.
```
index="dev" source="TestService"
| spath LogLevel
| search LogLevel=Information
```
The actual result in Splunk Cloud is that the existing search is cleared and a new search is performed with only the new command:
```
spath LogLevel | search LogLevel=Information
```
Hi, has anyone worked on "Microsoft Teams Rooms Pro Management" and Splunk integration? The requirement is to get data from this console into Splunk and use it for reporting/alerting purposes. Thanks
I'm trying to use tstats to calculate the daily total number of events for an index, per day, for one week, then calculate an average per day for the entire week as well as upper and lower bounds (+/- 1 standard deviation). I feel like I can get each individual thing to work: either the bar chart with the daily count, or the statistics. But I want them all in a single figure. This is my base search:
```
| tstats count as daily_events where index="index" earliest=-7d latest=-1d by _time span=1d
# | timechart span=1d sum(daily_events) as daily_events
```
which seems to work just fine by itself, with or without the following timechart. Now I figured I'd do something like this:
```
| stats sum(daily_events) as total_events stdev(daily_events) as std_dev
| eval avg_events=round(total_events/7, 0)
| eval upper = total_events + std_dev
| eval lower = total_events - std_dev
```
which by itself seems to work, though I now lose the results from the base search. So how can I combine these things (can I combine them?) into something close to what I want to achieve? I'm attaching a crude sketch for clarity. It feels like I somehow need to add three columns just replicating the upper, lower and avg_events values for every daily_events entry, or something; I just can't figure it out. Any feedback is much appreciated.
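eventstats is the usual way to combine these: it computes the same aggregates as stats but appends them to every row instead of collapsing the results, which gives exactly the replicated columns described. A hedged sketch built from the two pieces above (note it uses the average rather than the weekly total for the bounds):
```
| tstats count as daily_events where index="index" earliest=-7d latest=-1d by _time span=1d
| eventstats avg(daily_events) as avg_events stdev(daily_events) as std_dev
| eval upper = avg_events + std_dev, lower = avg_events - std_dev
| fields _time daily_events avg_events upper lower
```
Rendered as a column chart with a chart overlay on avg_events, upper and lower, this puts the daily bars and the three reference lines in one figure.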
Hi, I'm creating a query in Splunk and need to search a field over a specific date. Field example: lastLogonTimestamp=01:00.21 PM, Sat 04/29/2023. I want to search for anything where that field is 2023 or later, as my query will be running in 2024 and so on. Any advice would be appreciated. Many thanks
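A hedged sketch that parses the value into epoch time and compares against a date, assuming every event uses exactly the format in the example (the strptime pattern matches "01:00.21 PM, Sat 04/29/2023"):
```
... | eval lastLogon_epoch = strptime(lastLogonTimestamp, "%I:%M.%S %p, %a %m/%d/%Y")
| where lastLogon_epoch >= strptime("01/01/2023", "%m/%d/%Y")
```
If the real requirement is "within the last year" rather than "since 2023", `where lastLogon_epoch >= relative_time(now(), "-1y")` keeps working without edits as years roll over.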
Is it possible to rearrange the list in the filter? What I mean here is: can I move Business Process Enablement down and move COE Veeva to the top?
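If this is a Simple XML dashboard input, statically declared <choice> elements render in the order they are written, so reordering the lines reorders the list. A hedged sketch (the token and value names are assumptions; the labels come from the question):
```
<input type="dropdown" token="area">
  <label>Filter</label>
  <choice value="coe_veeva">COE Veeva</choice>
  <choice value="business_process_enablement">Business Process Enablement</choice>
</input>
```
For a dynamically populated input, adding a `| sort` to the populating search controls the order instead.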
I have created a dbxquery which has 6 columns in the SELECT statement, as shown below (chart 1). Now I want two of the columns, first_response and closure_time, on the y-axis, and MONTH on the x-axis, as shown in the chart below (chart 2); the other 3 columns (work_item_id, status and filed_against) have to be filters.

chart 1: [screenshot of the query results]

chart 2: [screenshot of the desired chart]
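A hedged sketch of the SPL shape that produces chart 2, assuming the three filter columns become dashboard tokens (the connection and token names are placeholders, and the dbxquery itself is unchanged):
```
| dbxquery connection=my_connection query="..."
| search work_item_id="$work_item_id$" status="$status$" filed_against="$filed_against$"
| table MONTH first_response closure_time
```
Rendered as a column chart, the first column (MONTH) becomes the x-axis and the two remaining numeric columns become the two y-axis series.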
Hi folks,

[Current scenario]

When a role is created with capabilities, I receive one event for the role creation, and each added capability generates its own event. For example, one role with five capabilities will produce six events in total with the same ID.

Event for role created:
```
2023-04-20T16:08:05,290 INFO [ID] 1234567:user - Added IdentityType=Role Name=<Role Name>, ObjId=<Object Id>.
```
Events for capability added:
```
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,020 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,021 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
2023-04-20T16:12:07,021 INFO [ID] 1234567:user - Access Control change on ObjectType=<Object type>, Name=<Capability>, ObjId=<Object Id>.
```
My SPL:
```
index=test
| eval Info=case(Type="Role" AND Action="Added",'User'." "."has created the role named ".'Name'." with the following capabilities: ".'Capabilities')
```
In the above I need the values of the five capabilities in the Capabilities field.

[Requirement]

Any idea on how to include all the capabilities, grouped by ID, into a field called 'Capabilities'?

Note: I don't want to use 'stats values()' directly in my main search.
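Since all six events share the same ID, eventstats can gather the capability names onto every row without collapsing the search the way a plain stats would. A hedged sketch, assuming User, Type, Action, Name and ID are already extracted and that capability events are recognizable by the "Access Control change" text:
```
index=test
| eventstats values(eval(if(match(_raw, "Access Control change"), Name, null()))) as Capabilities by ID
| eval Info=case(Type="Role" AND Action="Added", 'User'." has created the role named ".'Name'." with the following capabilities: ".mvjoin(Capabilities, ", "))
```
The mvjoin flattens the multivalue Capabilities field into one comma-separated string so it concatenates cleanly inside the case() expression.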