All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, we have 800+ servers, a mix of Windows and Linux. How can we get the following details from Splunk with the help of an SPL query: O/S version, Allocated Storage (GB), Utilized Storage (GB), Uptime %, CPU Utilization Peak %, and CPU Utilization Avg %? Can you please help us with this requirement? Thanks, Raghunadha.
Can someone explain to me where the attrs argument pulls its attributes from? Originally I thought it was essentially the "-Properties" flag from Get-ADUser and that I would be able to use those properties, but whenever I try, it says: External search command 'ldapsearch' returned error code 1. Script output = "error_message=Invalid attribute types in attrs list: PasswordExpirationDate ". Where is the attrs list? How can I define more attrs?
Hi Kevinmabini, the ingestion flow should not be affected by the upgrade. Could you open a support case so that we can take a closer look at the stack and identify the issue? Thanks!
Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have a Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector; I also enabled the OTLP receiver and added a new pipeline to the agent config. However, I struggle to understand how to send traces to the collector. As I see in k8s, there are many agents deployed, one for each node:

$ kubectl get pods --namespace splunk
NAME                                                        READY   STATUS    RESTARTS   AGE
splunk-otel-collector-agent-286bf                           1/1     Running   0          172m
splunk-otel-collector-agent-2cp2k                           1/1     Running   0          172m
splunk-otel-collector-agent-2gbhh                           1/1     Running   0          172m
splunk-otel-collector-agent-44ts5                           1/1     Running   0          172m
splunk-otel-collector-agent-6ngvz                           1/1     Running   0          173m
splunk-otel-collector-agent-cpmtg                           1/1     Running   0          172m
splunk-otel-collector-agent-dfx8v                           1/1     Running   0          171m
splunk-otel-collector-agent-f4trw                           1/1     Running   0          172m
splunk-otel-collector-agent-g85cw                           1/1     Running   0          172m
splunk-otel-collector-agent-gz9ch                           1/1     Running   0          172m
splunk-otel-collector-agent-hjbmt                           1/1     Running   0          172m
splunk-otel-collector-agent-lttst                           1/1     Running   0          172m
splunk-otel-collector-agent-lzz4f                           1/1     Running   0          172m
splunk-otel-collector-agent-mcgc8                           1/1     Running   0          173m
splunk-otel-collector-agent-snqg8                           1/1     Running   0          173m
splunk-otel-collector-agent-t2gg8                           1/1     Running   0          171m
splunk-otel-collector-agent-tlsfd                           1/1     Running   0          172m
splunk-otel-collector-agent-tr5qg                           1/1     Running   0          172m
splunk-otel-collector-agent-vn2vr                           1/1     Running   0          172m
splunk-otel-collector-agent-xxxmr                           1/1     Running   0          173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft   1/1     Running   0          9h

I thought I needed to somehow send trace requests to one of these agents, but I don't see any ingresses or services deployed, so my application has no DNS name to use for the collector.
$ kubectl get services --namespace splunk
No resources found in splunk namespace.
$ kubectl get ingresses --namespace splunk
No resources found in splunk namespace.

Does this mean I have to add ingresses/services myself, and the Splunk otel-collector Helm chart doesn't include them? Do you have any recommendations on how I can configure this collector to receive traces over gRPC from applications in other pods and namespaces? It would be nice to have one URL that is automatically routed to the collector agents.
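One common pattern with this chart (worth verifying against your values.yaml) is that the agent runs as a DaemonSet with host ports, so each application pod sends to the agent on its own node rather than to a Service. A sketch of the app-side Deployment snippet, assuming the default OTLP gRPC port 4317 and hostPort networking are enabled in the chart:

```yaml
# Sketch: point each application pod at the collector agent on its own node.
# Assumes the chart exposes OTLP gRPC on hostPort 4317 (verify in values.yaml).
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP     # IP of the node this pod runs on
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
```

With this approach no Service or Ingress is needed: the downward API gives each pod its node's IP, and the DaemonSet agent listening on that host port receives the traces locally.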
Hi, I have the string below and I'm trying to extract the downstream status code using this expression. I used to do this a long time ago, but it appears those brain cells have aged out.

Regex that works in regex101 but not in Splunk:

rex "DownstreamStatus..(?<dscode>\d+)" | stats count by dscode

String:

{"ClientAddr":"blah","ClientHost":"blah","ClientPort":"50721","ClientUsername":"-","DownstreamContentSize":11,"DownstreamStatus":502,"Duration":179590376953,"OriginContentSize":11,"OriginDuration":179590108721,"OriginStatus":502,"Overhead":268232,
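As a sanity check outside Splunk, the same pattern (spelled with Python's (?P<...>) named-group syntax) does capture 502 from that string, which suggests the regex itself is fine and the problem is more likely quote-escaping or field selection on the Splunk side (e.g. needing the embedded double quotes escaped in the SPL, or an explicit field= clause):

```python
import re

# Sample event from the post, truncated here for brevity
sample = '{"ClientAddr":"blah","DownstreamContentSize":11,"DownstreamStatus":502,"Duration":179590376953}'

# Same pattern as the rex; Python spells named groups (?P<name>...)
m = re.search(r'DownstreamStatus..(?P<dscode>\d+)', sample)
print(m.group("dscode"))  # -> 502
```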
https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf

[MonitorNoHandle://<path>]
* This input intercepts file writes to the specific file.

It appears this monitor config does not read the file itself but only intercepts what is about to be written to it. Your image shows last modified as Jan 4th, which is your stated last ingest. I think your configuration will only capture future content, not existing content.
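If the goal is to pick up what is already in the file as well as future writes, a plain [monitor://] stanza may be what's needed, since MonitorNoHandle only captures writes made after it starts. A sketch for comparison (the path is hypothetical; adjust to your environment):

```
# inputs.conf sketch -- hypothetical path and index, adjust as needed.

# Reads existing file content, then tails new writes:
[monitor://C:\Logs\app.log]
index = main
sourcetype = app_log

# Windows-only: intercepts only future writes to the file;
# existing content is never read:
[MonitorNoHandle://C:\Logs\app.log]
index = main
sourcetype = app_log
```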
Hi team, I'm trying to set up the integration between Jamf Protect and Splunk according to the steps in the following link: Jamf Protect Documentation - Splunk Integration. When I follow the steps under the "Testing the Event Collector Token" heading, specifically the part that says "Using the values obtained in step 1, execute the following command:", I can see the logs sent from my local machine on the Splunk search head, but I can't see the JamfPro logs coming from other clients. However, I can see the logs when I use curl to send them. Additionally, when I run tcpdump on the heavy forwarder, I can see that the logs are being received, but I can't see them when searching. What could be the reason for this? Also, where can I check the error logs from the command line to examine any issues? Thanks
OK, this is ugly but it should be functional. After researching, I wasn't able to find a method to dynamically assign the default value, but you can default to the first value of the search result.

Source table:

Name       Value
Name 01    Value 01
Name 02    value 02
Name 03    value03
Name 04    Value04

Set your data source to pull from a search designed like the following (modify for your needs):

| inputlookup input_test_values.csv
| stats values(Value) as Value
| format
| rename search as Value
| eval Name="All"
| table Name Value
| append
    [| inputlookup input_test_values.csv
    | table Name Value ]

Search output:

Name       Value
All        ( ( ( Value="Value 01" OR Value="Value04" OR Value="value 02" OR Value="value03" ) ) )
Name 01    Value 01
Name 02    value 02
Name 03    value03
Name 04    Value04

Your input's config panel to the right should let you select "First Value" as the default. The search as written can display "All" in the drop-down while dynamically building the first value from the contents of the lookup, no matter how many times it is edited. The way I have written it makes the first value very easy to drop into a subsequent search string. Your needs may vary, so you can play with the format command to get that output exactly as you need it.
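For what it's worth, the stats values(Value) + format steps are effectively building one OR'ed search clause out of every value in the lookup. A rough Python sketch of what format emits for the "All" row (simplified: the real command adds extra levels of parenthesis nesting):

```python
# Hypothetical values, as in the example lookup
values = ["Value 01", "Value04", "value 02", "value03"]

# Roughly what `| stats values(Value) as Value | format` produces
all_clause = "( " + " OR ".join(f'Value="{v}"' for v in values) + " )"
print(all_clause)
```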
Hello, good day community. I have a problem and I hope you can help me. I need to configure an asset of the HTTP app to make a GET request. In the asset settings tab there is a mandatory field called base_url. The problem is that I need that base URL to be dynamic, since I am going to take it from the artifacts through a flow, and each URL is different. So far I have not been able to solve this. I hope for your help, thank you.
Thanks Dan!  That worked perfectly just as you provided.
@SplunkExplorer wrote:

Hi Splunkers, I have a doubt about a setting for Splunk Enterprise Security. As usual when I put a question here, let me share a minimum of context and assumptions.

Environment:
- A completely on-prem Splunk Enterprise (no Splunk Cloud SaaS)
- Currently, only one SH
- Clustered indexers

Task: install and configure a SH with Splunk Enterprise Security.

Assumptions:
- I know the full installation procedure (doc + Splunk Enterprise Admin course)
- I know how to manage a clustered environment (doc + Architect course). For example, I know that to set a Splunk instance as a SH I can use, from the CLI:

> splunk edit cluster-config -mode searchhead -manager_uri https://<manager node address> -secret <cluster secret>

Questions:
1. Is this syntax still valid to add a SH with ES installed on it? The doubt is whether the presence of ES should lead me to use a different approach to tell "Hey, SH with ES: the indexers to query are those."
2. Should the SH with the ES component be added as a single SH (decoupled from the already existing SH), or should I create a SH cluster with the normal SH + the ES SH?

Check DM.
Hi @Sikha.Singh, I just had a read of the ticket. They shared some documentation, and since what you were looking for was not there, they suggested you submit it as a feature request, which you can do here: https://community.appdynamics.com/t5/Idea-Exchange/idb-p/ideas
Hello, this will auto-extract a variable number of x-forwarded-for addresses and place them into a multivalue field:

| makeresults
| eval tmp="Thu 1/18/2024 @ 06:52:30.918 PM UTC 00.000.00.000 (00.000.000.001, 00.000.00.01, 00.000.00.03) > 00.000.00.0:0000 \"PUT /uri/query/here HTTP/1.1\" - 1270 200 3466 https-openssl-nio-00.000.00.0-000-exec-15 \"hxxps://url.splunk.com/\" \"user_agent\" - - - -"
| rex field=tmp "^(?<timestamp>\w+\s\d+\/\d+\/\d+\s\@\s\d+:\d+:\d+\.\d+\s\w+\s\w+)\s(?<remote_hostname>\S+)\s\((?<x_forwarded_for>[^\)]+).*$"
| table tmp timestamp remote_hostname x_forwarded_for
| eval x_forwarded_for=split(replace(x_forwarded_for,"\s",""),",")
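The extraction and the split step can be checked outside Splunk. A small Python sketch of the same logic (named groups spelled (?P<...>), then stripping spaces and splitting on commas):

```python
import re

# Sample event from the post, truncated after the request
sample = ('Thu 1/18/2024 @ 06:52:30.918 PM UTC 00.000.00.000 '
          '(00.000.000.001, 00.000.00.01, 00.000.00.03) > 00.000.00.0:0000 '
          '"PUT /uri/query/here HTTP/1.1"')

m = re.search(
    r'^(?P<timestamp>\w+\s\d+/\d+/\d+\s@\s\d+:\d+:\d+\.\d+\s\w+\s\w+)\s'
    r'(?P<remote_hostname>\S+)\s\((?P<x_forwarded_for>[^)]+)',
    sample)

# Equivalent of: eval x_forwarded_for=split(replace(x_forwarded_for,"\s",""),",")
xff = m.group("x_forwarded_for").replace(" ", "").split(",")
print(xff)  # -> ['00.000.000.001', '00.000.00.01', '00.000.00.03']
```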
I am trying to replace the default value of a drop-down with all the values from a column in a lookup table. Example lookup table:

Name      log_group
Name 1    Log1
Name 2    log 2
Name 3    log3

I need the drop-down default taken as log1,log2,log3.
While upgrading from 5.0 to 7.3.0, we are facing this error while setting up the account. Can someone help us fix this issue?
The SPL is a bit wonky, but it got results in the final format you were looking for. I'm curious how this SPL will perform against your live data.

| makeresults
| eval _raw="{ \"a.com\": [ { \"yahoo.com\":\"10ms\",\"trans-id\": \"x1\"}, { \"google.com\":\"20ms\",\"trans-id\": \"x2\"} ], \"b.com\": [ { \"aspera.com\":\"30ms\",\"trans-id\": \"x3\"}, { \"arista.com\":\"40ms\",\"trans-id\": \"x4\"} ], \"trans-id\":\"m1\", \"duration\":\"33ms\" }"
``` start parsing json object ```
| fromjson _raw
| foreach *.*
    [ | eval url_json=mvappend( url_json,
        case(
            mvcount('<<FIELD>>')==1, if(isnotnull(json('<<FIELD>>')), json_set('<<FIELD>>', "url", "<<FIELD>>"), null()),
            mvcount('<<FIELD>>')>1, mvmap('<<FIELD>>', if(isnotnull(json('<<FIELD>>')), json_set('<<FIELD>>', "url", "<<FIELD>>"), null()))
        ) ) ]
| fields + _time, url_json, "trans-id", duration
| rename "trans-id" as "top_trans-id"
| fields - _raw
| mvexpand url_json
| fromjson url_json
| fields - url_json
| foreach *.*
    [ | eval sub_url=if( isnotnull('<<FIELD>>') AND isnull(sub_url), "<<FIELD>>", 'sub_url' ),
        sub_duration=if( isnotnull('<<FIELD>>') AND isnull(sub_duration), '<<FIELD>>', 'sub_duration' ) ]
| rename "trans-id" as "sub_trans-id"
| fields + _time, "top_trans-id", url, duration, sub_duration, sub_url, sub_trans-id
| rename "top_trans-id" as "trans-id"

Final output:

There are some pretty big assumptions here, the biggest being that the keys of the _raw JSON will have fields in the "*.*" format, i.e. a dot in the field name (domain names).
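Outside Splunk, the same flattening can be sketched in Python, under the same assumption that the top-level keys whose values are lists of objects are the domain names, and that each call object holds exactly one sub-url key plus a trans-id:

```python
import json

# Same sample document as the makeresults above
raw = '''{"a.com": [{"yahoo.com": "10ms", "trans-id": "x1"},
                    {"google.com": "20ms", "trans-id": "x2"}],
          "b.com": [{"aspera.com": "30ms", "trans-id": "x3"},
                    {"arista.com": "40ms", "trans-id": "x4"}],
          "trans-id": "m1", "duration": "33ms"}'''

doc = json.loads(raw)
rows = []
for url, calls in doc.items():
    if not isinstance(calls, list):      # skip top-level scalars (trans-id, duration)
        continue
    for call in calls:
        sub_trans = call.pop("trans-id", None)
        # the one remaining key is the sub-url, its value the sub-duration
        (sub_url, sub_duration), = call.items()
        rows.append({"trans-id": doc["trans-id"], "url": url,
                     "duration": doc["duration"], "sub_url": sub_url,
                     "sub_duration": sub_duration, "sub_trans-id": sub_trans})

for row in rows:
    print(row)
```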
You should be able to build a report around the REST command:

| rest splunk_server=local /servicesNS/-/-/alerts/fired_alerts
I solved it by using the max_match option in the rex command. The x-forwarded-for addresses were extracted into a multivalue field x_forwarded_single:

| rex field=_raw "^(?P<timestamp>\w+\s\d+\/\d+\/\d+\s.\s\d+:\d+:\d+\.\d+\s\w+\s\w+)\s(?P<remote_hostname>\S+)\s\((?P<x_forwarded_for>[^\)]*)\)\s\>\s(?P<local_ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?P<local_port>[\d\-]+)\s\"(?<request>[^\"]+)\"\s(?<request_body_length>\S+)\s(?<time_milli>\S+)\s(?<http_status>\S+)\s(?<bytes_sent>\S+)\s(?<request_thread_name>\S+)\s\"(?<referer>[^\"\s]*)\"\s\"(?<user_agent>[^\"]*)\"\s(?<remote_user>\S+)\s(?<user_session_id>\S+)\s(?<username>\S+)\s(?<session_tracker>\S+)"
| rex field=request "(?<http_method>\w*)\s+(?<url>[^ ]*)\s+(?<http_version>[^\"]+)[^ \n]*"
| rex field=url "(?<uri_path>[^?]+)(?:(?<uri_query>\?.*))?"
| rex field=x_forwarded_for max_match=3 "(?<x_forwarded_single>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
So there are a couple of ways you could approach this:

1. Less elegant solution: add a decision/filter block to the existing playbook that runs every time and checks for the tag. If the tag matches, continue; otherwise, end the playbook.

2. If you truly don't want the playbook to run at all, I think you'll need to make a parent playbook with a decision or filter, and call the subplaybook from there. The parent playbook will still run on all events with the label, but the playbook you want gated wouldn't run at all.

I added a screenshot of the datapath you'd need to match the container tags.
Hi @sekhar463,
which user are you using to run Splunk? Does this user have the permissions to read this file?
Please also check that the path of the file is correct by running the dir command in a cmd window.
Ciao.
Giuseppe