
All Posts

To define a relative metric path in AppDynamics, you can do the following:
1. Hover over a metric in the Metric Browser to get the full metric path.
2. Right-click on the metric and select Copy Full Path.
3. Truncate the leftmost part of the full metric path.
4. Use the category in the Metric Selection window as the first segment of the relative metric path.
5. Truncate everything from the full metric path that comes before that segment.

Please refer to the doc: https://docs.appdynamics.com/appd/23.x/latest/en/appdynamics-essentials/dashboards-and-reports/custom-dashboards/widgets/use-wildcards-in-metric-definitions

Also, please make sure that you select the correct entity on the "Affect Entities" tab.
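For illustration, a hedged example (the path below is hypothetical, just to show the shape). Suppose Copy Full Path gives:

    Application Infrastructure Performance|MyTier|Individual Nodes|Node1|Hardware Resources|CPU|%Busy

If the Metric Selection window lists Hardware Resources as the category, everything before that segment is truncated, and the relative metric path becomes:

    Hardware Resources|CPU|%Busy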
1. There is no such thing as "just raw data". Even if a bucket is not searchable, it still retains at least the default metadata fields and any fields extracted at index time.
2. If you want the data to not be shared across sites, why not just build two separate clusters?
3. You can't differentiate (site) RF/SF between indexes. You can only enable or disable replication altogether for an index.
4. https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/Multisitearchitecture#Multisite_searching_and_search_affinity When there are no primaries in the site for which you have affinity set, the SH will reach for primaries in another site. That's by design. See point 2.
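To illustrate point 3, the only per-index replication switch is the repFactor setting in indexes.conf on the cluster peers; a minimal sketch, with example index names:

    [replicated_index]
    homePath = $SPLUNK_DB/replicated_index/db
    coldPath = $SPLUNK_DB/replicated_index/colddb
    thawedPath = $SPLUNK_DB/replicated_index/thaweddb
    # auto = replicate this index per the cluster-wide RF/SF
    repFactor = auto

    [unreplicated_index]
    homePath = $SPLUNK_DB/unreplicated_index/db
    coldPath = $SPLUNK_DB/unreplicated_index/colddb
    thawedPath = $SPLUNK_DB/unreplicated_index/thaweddb
    # 0 = do not replicate; only the originating peer keeps a copy
    repFactor = 0

It is auto or 0, nothing in between; there is no per-index or per-site RF/SF count.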
Yes, if it can be done with stats, that would be best. It is the Splunky way to do it.
I tailored the query to the appropriate fields and voilà, it worked. I appreciate your efforts and thank you for your time.
Hi, These sound like APM (Application Performance Monitoring) use cases. If you can instrument the application that is making calls out to these 3 different APIs, then your application will show up as an instrumented service on the APM service map, and the 3 different APIs will show up as "inferred services". Inferred services won't have as much detail as an instrumented service, but you will see the response times, error codes, etc. that are returned when your instrumented application makes calls to them. So, yes, out of the box, you will see an overview of how your application is calling these 3 other services. https://docs.splunk.com/observability/en/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.html#instrument-otel-dotnet-applications
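As a rough sketch of the configuration side (the service name, realm, and token below are placeholders; the actual install steps per language are in the linked docs):

    # Standard OpenTelemetry environment variables, read by the Splunk
    # distributions of the OTel instrumentation agents.
    export OTEL_SERVICE_NAME=my-ordering-service
    export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod
    # Splunk Observability Cloud destination (placeholders)
    export SPLUNK_REALM=us0
    export SPLUNK_ACCESS_TOKEN=<your-ingest-token>

OTEL_SERVICE_NAME is what appears on the APM service map; the three APIs it calls then show up around it as inferred services without any configuration of their own.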
Hello community, I'm encountering an issue while working with custom content in Splunk Security Essentials. I have created custom content with this search:

    index=windows sourcetype=WinEventLog
    | stats count(eval(action="success")) as successes count(eval(action="failure")) as failures by src
    | where successes>0 AND failures>100

However, when I navigate to the content under "Content -> Security Content" and attempt to save this as a scheduled search, the option "Save Scheduled Search" is not available. I noticed that in the pre-existing content, such as "Basic Brute Force," this option is present. Could you please advise on why this option might not be appearing for my custom content? Are there any additional steps or configurations required to enable this feature for custom content? Thank you for your assistance! Best regards
Ideally, don't use join! You could try searching both indexes in your initial search and then "join" the data from the events using stats.
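A minimal sketch of that stats-based pattern, assuming both sets of events share a node_id field (the index names and search terms here are placeholders, loosely mirroring the query later in this thread):

    (index=index_a "claim events") OR (index=index_b "connected events")
    | stats earliest(_time) as nodeFirstConnected values(routerMac) as routerMac values(status) as status by node_id

Because both indexes are searched in one pass, stats can group events from either source by the shared key, and you avoid the subsearch row and time limits that silently truncate join results.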
Have you tried using a left (also called outer) join vs. an inner join? An inner join will only give you data where the node_id appears in both sets of data. A left join will give you all the results from the base search, joined with the subsearch results where they match.
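As a hedged sketch of the difference, with node_id as the join key and a placeholder subsearch:

    <base search>
    | join type=left node_id
        [ search index=other_index | table node_id, nodeFirstConnectedTime ]

With type=left, base-search rows that find no match are kept and simply get a null nodeFirstConnectedTime; with type=inner they are dropped entirely.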
Is something like this what you are looking for? Set the time range picker to your desired range.

    index=windows EventCode=4624 Account_Name IN ("Larry","Curly","Moe")
    | eval Logon_Account_Name=mvindex(Account_Name, 1)
    | table _time, ComputerName, Logon_Account_Name
    | sort _time
I am not getting full data in the output when combining 2 queries using join. When I run the first query individually, I get 143 results after dedup, but upon joining I am getting only 71 results, whereas I know that for the remaining records, data is available when running the 2nd query individually. How can I fix this? I am searching for records where pods got claimed, then searching for the connected time using a subsearch, and I need all columns in the output in tabular format.

    index=aws-cpe-scl source=*winserver* "methodPath=POST:/scl/v1/equipment/router/*/claim/pods" responseJson "techMobile=true"
    | rex "responseJson=(?<json>.*)"
    | eval routerMac = routerMac
    | eval techMobile = techMobile
    | eval status = status
    | spath input=json path=claimed{}.boxSerialNumber output=podSerialNumber
    | spath input=json path=claimed{}.locationId output=locationId
    | eval node_id = substr(podSerialNumber, 0, 10)
    | eval winClaimTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
    | table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile
    | dedup routerMac, node_id sortby winClaimTime
    | join type=inner node_id
        [ search index=aws-cpe-osc ConnectionAgent "Node * connected:" model=PP203X
        | rex field=_raw "Node\s(?<node_id>\w+)\sconnected"
        | eval nodeFirstConnectedTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
        | table nodeFirstConnectedTime, node_id
        | dedup node_id sortby nodeFirstConnectedTime]
    | table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile, nodeFirstConnectedTime
Can we apply the following example on a UF? Keep specific events and discard the rest: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest

The answer is no. The example is for any non-UF instance. For a UF you can modify the example as follows.

Edit props.conf and add the following:

    [source::/var/log/messages]
    TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = \[sshd\]
    DEST_KEY = _TCP_ROUTING
    FORMAT = <valid-tcpoutgroup(s)>

Or edit props.conf and add the following:

    [source::/var/log/messages]
    TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = \[sshd\]
    DEST_KEY = queue
    FORMAT = parsingQueue
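For completeness, whatever group name you put in FORMAT for the _TCP_ROUTING variant must exist as a tcpout group in outputs.conf on the same UF. A minimal sketch, with a hypothetical group name and placeholder indexer addresses:

    # outputs.conf on the UF; "my_indexers" and the hosts are placeholders
    [tcpout:my_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997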
Version 9.2.2 is available from the main Download page at https://www.splunk.com/en_us/download/splunk-enterprise.html  
Try removing the quotes from the file path. Check splunkd.log for errors relating to that input.
Effectively I want to comb through the Windows event logs to determine logon dates and times for specific user(s) and output those entries into a table with username, date, and time. We have a windows index, and we want to query the last seven days and the number of logins for a given user. I would imagine it'd be fairly simple to do; I just don't know SPL. This is why I engaged the brain trust online in this forum. I don't Splunk as a day job, so I'm not familiar with the intricacies of SPL. In short: give me all entries from the Windows security logs for the last seven days from the windows index for a specific user with event ID 4624. Thank you.
I want Splunk to ingest my AV log. I made the following entry in the inputs.conf file. Note: the log file is a text file with no formatting.

    [monitor://C:ProgramData\'Endpoint Security'\logs\OnDemandScan_Activity.log]
    disable=0
    index=winlogs
    sourcetype=WinEventLog:AntiVirus
    start_from=0
    current_only=0
    checkpointInterval = 5
    renderXml=false

My question is: is the stanza written correctly? When I do a search I am not seeing anything.
Where can I download Splunk Enterprise 9.2.2? I have version 9.2.1 and it has a vulnerability. Here is the description: The version of Splunk installed on the remote host is prior to the tested version. It is, therefore, affected by a vulnerability as referenced in the SVD-2024-0703 advisory.
This is a Splunk forum. No one here knows what your data source looks like. To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:

1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Please illustrate the full message. The look of the fragment suggests your source is actually JSON, something like

    {"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257", "foo":"bar"}

Is this correct? Using regex directly on structured data is strongly discouraged, as any such regex is doomed to be fragile. If the JSON is the raw event, Splunk will have already extracted a field called "message". Start from this field instead. This field is also structured, as KV pairs. Use kv (aka extract) instead of regex:

    | rename _raw as temp, message as _raw
    | kv kvdelim=": " pairdelim=","
    | rename _raw as message, temp as _raw
    | fields fuel

Your sample data would have given

    fuel  _raw                                                                                                                   _time
    0     {"message":"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257", "foo":"bar"}  2024-07-17 09:06:35

Here is an emulation for you to play with and compare with real data:

    | makeresults
    | eval _raw = "{\"message\":\"journey::cook_client: fan: 0, auger: 0, glow_v: 36, glow: false, fuel: 0, cavity_temp: 257\", \"foo\":\"bar\"}"
    | spath
    ``` data emulation above ```
A. The search results are shown below.

B. My goals are as follows:
1. site1's SH wants to retrieve only the data that site1's indexer has.
2. site2's SH wants to retrieve only the data that site2's indexer has.
3. site1's indexer stores RAW data from site1 and site2.
4. site2's indexer stores only site2's RAW data.

C. Is it possible to configure the following structure?

D. server.conf options:
1. On site1_SH, there is no difference between the behavior when server.conf is set to site=site0 and when it is set to site=site1.
2. On site2_SH, there is no difference in behavior between setting server.conf to site=site0 and site=site2.
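For reference, this is the setting being compared in D; a minimal sketch of site1_SH's server.conf (the manager URI is a placeholder, and per the multisite docs linked earlier in this thread, site0 on a search head disables search affinity entirely):

    [general]
    site = site1

    [clustering]
    mode = searchhead
    manager_uri = https://<manager-node>:8089
    multisite = true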
Hi All, Hope this message finds you well. I have installed Splunk on-prem on a Linux box as the splunk user and have given the proper permissions. The Azure VM gets shut down automatically at around 11 pm every day and there is no auto start; for the time being we are manually starting the VM. My problem here is that while installing the Splunk instance, I ran the command enable boot-start and it was successful, but the splunkd service does not start on its own. Can anyone please suggest what can be done to fix it? Thanks in advance