All Posts


Hi, I am implementing a Splunk SOAR Connector and I was wondering if it is possible to write logs at different levels. There are different levels that can be configured on System Health / Debugging, but the BaseConnector only has debug_print and error_print methods. How can I print INFO, WARNING, and TRACE logs from my connector? Thanks, Eduardo
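A minimal workaround sketch, assuming only the two BaseConnector methods mentioned above are available (the helper and its level routing are hypothetical, not part of the SOAR API): encode the level as a message prefix and send it through debug_print or error_print.

    def log(connector, level, message):
        # Route a leveled message through the only two print methods the
        # post says BaseConnector exposes: debug_print and error_print.
        prefix = "[{}] ".format(level.upper())
        if level.upper() in ("ERROR", "WARNING"):
            # error_print output is visible even when debug logging is off
            connector.error_print(prefix + message)
        else:
            # INFO and TRACE fall back to the debug channel
            connector.debug_print(prefix + message)

Inside an action handler that would be called as log(self, "info", "polling started").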
I have a similar situation in my environment. Making the changes to restmap.conf prevents the App Launcher from loading (this is true), and I am on version 9.1.2, where the fix should already have been included.
So... I have a HEC receiving JSON for phone calls, using a custom sourcetype which parses calls from a field called timestampStr that looks like this: 2024-01-19 19:17:04.60313329. The sourcetype uses TIME_FORMAT with %Y-%m-%d %H:%M:%S.%9N, and this sets _time to 2024-01-19 19:17:04.603 in the event, which seems right.

However, if I then, as a user in the Central time zone, ask for the calls 'in the last 15 minutes' (assuming I just made the call), it does not show up. In fact, to find it I have to use All Time, because the call appears to be 'in the future' relative to my user; that feels dumb/weird to write, because I know it's not technically true, but it's the best way I can explain it. Here is what I mean (example):

1. I placed a call into the network at 1:17 PM Central time (which is 19:17 UTC).
2. The application sent a JSON message for that call.
3. It came in as above.
4. I then ran a search in Splunk looking for that call in the last 15 minutes, and it did not find it.
5. However, I immediately searched All Time, and that did find it.

My assumption was that if my user settings are set to Central, the times would 'correlate' OK, but this appears not to be true; more likely, my understanding of what is going on is very poor. So I, once again, return to the community looking for answers.
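For reference, the sourcetype described above would look roughly like this in props.conf on the parsing tier. The stanza name and TIME_PREFIX are hypothetical; the TZ line is the assumption worth checking, since without it Splunk interprets the timestamp in the indexer's local zone, which would produce exactly this kind of future-shifted _time for a UTC source.

    # props.conf; stanza name and TIME_PREFIX are hypothetical
    [callcenter_json]
    TIME_PREFIX = "timestampStr"\s*:\s*"
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
    # Assumption: the source writes UTC timestamps. Without TZ, the indexer's
    # local zone is applied, pushing _time hours away from the true call time.
    TZ = UTC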
I haven't had much success with dynamically setting individual fields within an asset config. That being said, if you have a set list of URLs for that field, you can configure an asset per base_url you need, and then pass in the asset name as a parameter.  You may also have some success if you edit the HTTP app itself and modify this functionality for your own use cases, but that's a bit more complicated.
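To illustrate the asset-name-as-parameter idea in a classic playbook, a sketch along these lines might work; the asset name, action name, and parameter here are assumptions to check against the HTTP app's documentation, but phantom.act does accept an assets list that scopes the run to one asset.

    import phantom.rules as phantom

    def run_http_get(container):
        # Hypothetical: one asset per base_url, name chosen at runtime
        # (e.g. read from a field on the triggering artifact)
        asset_name = "http_site_a"
        phantom.act(
            action="get data",                     # HTTP app's GET-style action
            parameters=[{"location": "/status"}],  # path appended to the asset's base_url
            assets=[asset_name],
            name="dynamic_http_get",
        )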
You cannot do this with a simple event search as you attempted. To add fields (sometimes called "enrichment"), you need to use the lookup command. (Or join with inputlookup and sacrifice performance, but that doesn't apply in your case.) Your question is really about wanting to match a wildcard at the beginning of a key, which lookup does not support. Given your sample data, you don't seem to have a real choice, so you will have to take some performance penalty and perform the string matches yourself.

People (including myself) used to work around similar limitations in lookup with awkward mvzip-mvexpand-split sequences, and the code is difficult to maintain. Since 8.2, Splunk has included a set of JSON functions that can represent data structures more expressively. Here is one method:

| makeresults count=4
| streamstats count
| eval number = case(count=1, 25, count=2, 39, count=3, 31, count=4, null())
| eval string1 = case(count=1, "I like blue berries", count=3, "The sea is blue", count=2, "black is all colors", count=4, "Theredsunisredhot")
| table string1
| append
    [| inputlookup wildlookup.csv
    | tojson output_field=wildlookup
    | stats values(wildlookup) as wildlookup
    | eval wild = json_object()
    | foreach wildlookup mode=multivalue
        [ eval wild = json_set(wild, json_extract(<<ITEM>>, "colorkey"), <<ITEM>>) ]
    | fields wild]
| eventstats values(wild) as wild
| where isnotnull(string1)
| eval colors = json_keys(wild)
| foreach colors mode=json_array
    [ eval colorkey = mvappend(colorkey, if(match(string1, <<ITEM>>), <<ITEM>>, null())) ]
| mvexpand colorkey ``` in case of multiple matches ```
| foreach flagtype active
    [ eval <<FIELD>> = json_extract(json_extract(wild, colorkey), "<<FIELD>>") ]
| eval flag = "KEYWORD FLAG"
| table flagtype, flag, string1, colorkey

Note I stripped fields that are irrelevant to the resultant table. I also made provisions to protect against possible multiple color matches. The output is:

flagtype   flag           string1               colorkey
sticker    KEYWORD FLAG   I like blue berries   blue
           KEYWORD FLAG   black is all colors
sticker    KEYWORD FLAG   The sea is blue       blue
tape       KEYWORD FLAG   Theredsunisredhot     red

Hope this helps.
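For reference, the wildlookup.csv assumed in that search needs at least colorkey, flagtype, and active columns (the foreach and json_extract calls reference all three). A hypothetical shape consistent with the output table, with the active values purely illustrative:

    colorkey,flagtype,active
    blue,sticker,true
    red,tape,true
    black,,true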
argh!!!  Stupid me forgot the field argument.......  Now it works, sorry everyone.  
Hi Team, we have 800+ servers, a mix of Windows and Linux. How can we get the following details from Splunk with the help of an SPL query: O/S version, Allocated Storage (GB), Utilized Storage (GB), Uptime %, CPU Utilization Peak %, and CPU Utilization Avg %? Can you please help us with this requirement? Thanks, Raghunadha.
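As a starting-point sketch only, assuming the Splunk Add-on for Unix and Linux and the Splunk Add-on for Microsoft Windows are already collecting into an index named os (the index, sourcetypes, and field names below are assumptions to verify in your environment), CPU average and peak per host could look like:

    index=os (sourcetype=cpu OR sourcetype="Perfmon:CPU")
    ``` Unix cpu events carry pctIdle; Windows Perfmon CPU events carry Value ```
    | eval cpu_used = coalesce(100 - pctIdle, Value)
    | stats avg(cpu_used) AS cpu_avg_pct max(cpu_used) AS cpu_peak_pct BY host

Storage and uptime would come from the same add-ons' df/disk and uptime sourcetypes, each with their own field names.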
Can someone explain to me where the attrs argument pulls its attributes from? Originally I thought it was essentially the "-Properties" flag from Get-ADuser and I would be able to use those properties but whenever I try it says "External search command 'ldapsearch' returned error code 1. Script output = "error_message=Invalid attribute types in attrs list: PasswordExpirationDate "." Where is the attrs list? How can I define more attrs?
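For what it's worth, my understanding (worth verifying against the SA-ldapsearch docs) is that attrs takes raw LDAP attribute names from the directory schema, not the computed properties that Get-ADUser adds. PasswordExpirationDate is one of those computed properties; the underlying constructed AD attribute is msDS-UserPasswordExpiryTimeComputed. A hedged example (the account name is hypothetical):

    | ldapsearch domain=default search="(&(objectClass=user)(sAMAccountName=jdoe))" attrs="sAMAccountName,pwdLastSet,msDS-UserPasswordExpiryTimeComputed"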
Hi Kevinmabini, the ingestion flow should not be affected by the upgrade. Could you open a support case? So that we can then take a close look at the stack and identify the issue. Thanks!
Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have a Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector; I also enabled the OTLP receiver and added a new pipeline to the agent config. However, I struggle to understand how to send traces to the collector. As I see in k8s, there are many agents deployed, one for each node:

$ kubectl get pods --namespace splunk
NAME                                                        READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-286bf                           1/1     Running   0          172m
splunk-otel-collector-agent-2cp2k                           1/1     Running   0          172m
splunk-otel-collector-agent-2gbhh                           1/1     Running   0          172m
splunk-otel-collector-agent-44ts5                           1/1     Running   0          172m
splunk-otel-collector-agent-6ngvz                           1/1     Running   0          173m
splunk-otel-collector-agent-cpmtg                           1/1     Running   0          172m
splunk-otel-collector-agent-dfx8v                           1/1     Running   0          171m
splunk-otel-collector-agent-f4trw                           1/1     Running   0          172m
splunk-otel-collector-agent-g85cw                           1/1     Running   0          172m
splunk-otel-collector-agent-gz9ch                           1/1     Running   0          172m
splunk-otel-collector-agent-hjbmt                           1/1     Running   0          172m
splunk-otel-collector-agent-lttst                           1/1     Running   0          172m
splunk-otel-collector-agent-lzz4f                           1/1     Running   0          172m
splunk-otel-collector-agent-mcgc8                           1/1     Running   0          173m
splunk-otel-collector-agent-snqg8                           1/1     Running   0          173m
splunk-otel-collector-agent-t2gg8                           1/1     Running   0          171m
splunk-otel-collector-agent-tlsfd                           1/1     Running   0          172m
splunk-otel-collector-agent-tr5qg                           1/1     Running   0          172m
splunk-otel-collector-agent-vn2vr                           1/1     Running   0          172m
splunk-otel-collector-agent-xxxmr                           1/1     Running   0          173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft   1/1     Running   0          9h

I thought I needed to somehow send trace requests to one of these agents, but I don't see any ingresses or services deployed that my application could use as a DNS name for the collector:

$ kubectl get services --namespace splunk
No resources found in splunk namespace.
$ kubectl get ingresses --namespace splunk
No resources found in splunk namespace.

Does this mean I have to add some ingresses/services myself, and the splunk-otel-collector Helm chart doesn't include them? Do you have any recommendations on how to configure this collector so it can receive traces over gRPC from applications in other namespaces? It would be nice to have one URL that automatically routes to the collector agents.
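One common pattern with DaemonSet-style agents like these, assuming the chart's default of exposing OTLP gRPC on host port 4317 (worth confirming in your values file): instead of a Service, point each workload at its own node's agent via the Kubernetes downward API. A sketch for the application's container spec, not the chart config:

    # Hypothetical application pod spec snippet
    env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        # Agent DaemonSet assumed to expose OTLP gRPC on hostPort 4317
        value: "http://$(NODE_IP):4317"

This also keeps trace traffic node-local, which is the usual reason charts ship agents as a DaemonSet without a Service in front.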
Hi, I have the below string and I'm trying to extract the downstream status code using this expression. I used to do this a long time ago, but it appears those brain cells have aged out. This regex works in regex101 but not in Splunk:

rex "DownstreamStatus..(?<dscode>\d+)" | stats count by dscode

String:

{"ClientAddr":"blah","ClientHost":"blah","ClientPort":"50721","ClientUsername":"-","DownstreamContentSize":11,"DownstreamStatus":502,"Duration":179590376953,"OriginContentSize":11,"OriginDuration":179590108721,"OriginStatus":502,"Overhead":268232,
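A hedged suggestion, since the event is JSON: spath can pull the field without any regex, and if you prefer rex, anchoring on the quoted key avoids ambiguity (field names below come straight from the sample string):

    ... | spath input=_raw path=DownstreamStatus output=dscode | stats count by dscode

or, staying with rex (note the escaped quotes inside the SPL string):

    ... | rex "\"DownstreamStatus\":(?<dscode>\d+)" | stats count by dscode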
https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf

[MonitorNoHandle://<path>]
* This input intercepts file writes to the specific file.

It appears this monitor config does not read the file itself but only intercepts what is about to be written to the file. Your image shows last modified as Jan 4th, which is your stated last ingest. I think your configuration will only capture future content, not existing content.
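If the existing content is needed too, a plain monitor input would read the file from the beginning; a sketch with hypothetical path, index, and sourcetype:

    # inputs.conf on the forwarder; all values below are placeholders
    [monitor://C:\Logs\app.log]
    index = main
    sourcetype = app_log

MonitorNoHandle could then be reserved for files that Windows holds open exclusively.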
Hi team, I'm trying to set up the integration between Jamf Protect and Splunk according to the steps provided in the following link: Jamf Protect Documentation - Splunk Integration. When I follow the steps under the "Testing the Event Collector Token" heading, specifically the part that says "Using the values obtained in step 1, execute the following command:", I can see the logs sent from my local machine on the Splunk search head, but I can't see the JamfPro logs coming from other clients; I can only see logs when I use curl to send them. Additionally, when I run tcpdump on the heavy forwarder to check, I can see that the logs are being received, but I can't find them when searching. What could be the reason for this? Furthermore, where can I check the error logs from the command line to examine any issues? Thanks
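On the where-to-look question: HEC ingestion errors on the heavy forwarder land in splunkd.log (default install path assumed below), and the same component is searchable from the _internal index:

    tail -f /opt/splunk/var/log/splunk/splunkd.log | grep -i HttpInput

    index=_internal sourcetype=splunkd component=HttpInputDataHandler

Token, index, or SSL mismatches between the curl test and the Jamf clients would typically show up there.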
Ok, this is ugly but should be functional. After researching, I wasn't able to find a method to dynamically assign the default value, but you can default to the first value of the search result.

Source table:

Name      Value
Name 01   Value 01
Name 02   value 02
Name 03   value03
Name 04   Value04

Set your data source to pull from a search designed like the following (modify for your needs):

| inputlookup input_test_values.csv
| stats values(Value) as Value
| format
| rename search as Value
| eval Name="All"
| table Name Value
| append
    [| inputlookup input_test_values.csv
    | table Name Value ]

Search output:

Name      Value
All       ( ( ( Value="Value 01" OR Value="Value04" OR Value="value 02" OR Value="value03" ) ) )
Name 01   Value 01
Name 02   value 02
Name 03   value03
Name 04   Value04

Your input's config panel to the right should let you select "First Value" as the default. As written, the search displays "All" in the drop-down while dynamically building that first value from the contents of the lookup, no matter how many times the lookup is edited. The way I have written it makes the first value very easy to drop into a subsequent search string; your needs may vary, so play with the format command to get the output you need.
Hello, good day community. I have a problem and I hope you can help me. I need to configure an asset of the HTTP app to make a GET request. When configuring it in the asset settings tab, there is a mandatory field called base_url. The detail is that I need that base URL to be dynamic, since I am going to take it from the artifacts through a flow and each URL is different. So far I have not been able to solve it. I hope for your help, thank you.
Thanks Dan!  That worked perfectly just as you provided.
@SplunkExplorer wrote:

Hi Splunkers, I have a doubt about a setting for Splunk Enterprise Security. As usual when I put a question here, let me share a minimum of context and assumptions.

Environment:
- A completely on-prem Splunk Enterprise (no Splunk Cloud SaaS)
- Currently, only one SH
- Clustered indexers

Task: Install and configure a SH with Splunk Enterprise Security.

Assumptions:
- I know the full installation procedure (doc + Splunk Enterprise Admin course)
- I know how to manage a cluster environment (doc + Architect course). For example, I know that to set a Splunk instance as a SH I can use, from the CLI:

> splunk edit cluster-config -mode searchhead -manager_uri https://<manager node address> -secret <cluster secret>

Questions:
1. Is this syntax still valid to add a SH with ES installed on it? The doubt is whether the presence of ES should lead me to use a different approach to tell the SH with ES which indexers to query.
2. Should the SH with the ES component be added as a single SH (decoupled from the already existing SH), or should I create a SH Cluster with the normal SH plus the ES SH?

Check DM.
Hi @Sikha.Singh, I just had a read of the ticket. They shared some documentation, and since what you were looking for was not there, they suggested you submit it as a feature request, which you can do here: https://community.appdynamics.com/t5/Idea-Exchange/idb-p/ideas
| makeresults
| eval tmp="Thu 1/18/2024 @ 06:52:30.918 PM UTC 00.000.00.000 (00.000.000.001, 00.000.00.01, 00.000.00.03) > 00.000.00.0:0000 \"PUT /uri/query/here HTTP/1.1\" - 1270 200 3466 https-openssl-nio-00.000.00.0-000-exec-15 \"hxxps://url.splunk.com/\" \"user_agent\" - - - -"
| rex field=tmp "^(?<timestamp>\w+\s\d+\/\d+\/\d+\s\@\s\d+:\d+:\d+\.\d+\s\w+\s\w+)\s(?<remote_hostname>\S+)\s\((?<x_forwarded_for>[^\)]+).*$"
| table tmp timestamp remote_hostname x_forwarded_for
| eval x_forwarded_for=split(replace(x_forwarded_for,"\s",""),",")

Hello, this will auto-extract a variable number of x-forwarded-for addresses and place them into a multi-value field.
I am trying to replace the default value of a drop-down with all the values from a column in a lookup table. Example lookup table:

Name     log_group
Name 1   Log1
Name 2   log 2
Name 3   log3

I need the drop-down default taken as log1,log2,log3.