All Posts

When you talk about notable event suppression, I assume you are talking about the Notable Event Suppression action in Incident Review. If you want to whitelist/blacklist certain assets, you should add the lookup logic to the correlation search that created the notable event in the first place. You cannot add lookup logic to the event type search that ES creates for the suppression logic.
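A minimal sketch of that approach, assuming a hypothetical lookup definition named asset_whitelist keyed on a dest field; the final where drops whitelisted assets before the notable is generated:

... your correlation search logic ...
| lookup asset_whitelist dest OUTPUT dest AS is_whitelisted
| where isnull(is_whitelisted)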
The normal way to get data from Windows machines is to install the universal forwarder on the machine, and pretty much the rest happens as magic. https://www.splunk.com/en_us/blog/learn/splunk-universal-forwarder.html Also, you should install the TA (technical add-on) for Windows https://splunkbase.splunk.com/app/742 and then you will have the data in Splunk in a way that can be easily digested.
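Once the forwarder and TA are in place, a quick sanity check from the search head might look like this (the index name is an assumption; it depends on where your inputs are pointed):

index=wineventlog | stats count by host, sourcetype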
So there are a lot of questions to ask, as you state just Linux. Is it Debian or CentOS/Red Hat based? If it's Red Hat, are you using systemd? https://docs.splunk.com/Documentation/Splunk/9.2.2/Admin/ConfigureSplunktostartatboottime

When you run

[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -user splunk

what sort of output do you get? Keep in mind that if you are using systemd, there is an entire section in the documentation that goes over fighting that lovely beast. Have you checked /opt/splunk/var/log/splunk/splunkd.log to see if there are any issues when it attempts to autostart? Sometimes things such as permissions issues can also affect it. Are you able to manually start Splunk as the splunk user, and does it boot up fine?
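If systemd is in play, a hedged sketch of the systemd-managed variant of that command (adjust the user to your install):

[sudo] $SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk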
Actually, I forgot to mention this in the main post. I tried "spath", which is not extracting as expected (it extracts other values for one field).
As I understand it (haven't tested, though): if you enable SmartStore while having reduced buckets, cachemgr will happily upload them to SmartStore and keep them in the reduced state, fetching them into the cache whenever they are needed but not rebuilding the indexes, so searching will be slow. But if you have non-reduced buckets in SmartStore, you can't reduce them anymore, because you can't modify the contents of the remote storage. You only fetch a copy into the cache when needed.
To define a relative metric path in AppDynamics, you can do the following:

1. Hover over a metric in the Metric Browser to get the full metric path.
2. Right-click on the metric and select Copy Full Path.
3. Truncate the leftmost part of the full metric path.
4. Use the category in the Metric Selection window as the first segment of the relative metric path.
5. Truncate everything from the full metric path that comes before that segment.

Please refer to the doc: https://docs.appdynamics.com/appd/23.x/latest/en/appdynamics-essentials/dashboards-and-reports/custom-dashboards/widgets/use-wildcards-in-metric-definitions

Also, please make sure that you select the correct entity from the "Affect Entities" tab.
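For instance (a purely hypothetical path, just to illustrate): if Copy Full Path gives you

Application Infrastructure Performance|MyTier|JVM|Memory|Heap|Used %

and the category you pick in the Metric Selection window is JVM, the relative metric path would be

JVM|Memory|Heap|Used %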
1. There is no such thing as "just raw data". Even if a bucket is not searchable, it still retains at least the default metadata fields and fields extracted at index time.

2. If you want the data to not be shared across sites, why not just make two separate clusters?

3. You can't differentiate (site) RF/SF between indexes. You can only enable/disable replication altogether for an index.

4. https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/Multisitearchitecture#Multisite_searching_and_search_affinity When there are no primaries in the site for which you have affinity set, the SH will reach out to primaries in another site. That's by design. See point 2.
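Regarding point 3, that per-index switch is the repFactor setting in indexes.conf; a minimal sketch with hypothetical index names:

[my_replicated_index]
# replicated according to the cluster's replication factor
repFactor = auto

[my_local_only_index]
# not replicated at all (the default)
repFactor = 0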
Yes, if it can be done with stats that would be best. It is the Splunky way to do it.
I tailored the query to the appropriate fields and voilà, it worked. I appreciate your efforts and thank you for your time.
Hi, these sound like APM (Application Performance Monitoring) use cases. If you can instrument the application that is making calls out to these 3 different APIs, then your application will show up as an instrumented service on the APM service map, and the 3 different APIs will show up as "inferred services". Inferred services won't have as much detail as an instrumented service, but you will see the response times, error codes, etc. that are returned when your instrumented application makes calls to them. So, yes, out of the box, you will see an overview of how your application is calling these 3 other services. https://docs.splunk.com/observability/en/gdi/get-data-in/application/otel-dotnet/instrumentation/instrument-dotnet-application.html#instrument-otel-dotnet-applications
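If you go the instrumentation route, the service identity and destination are typically supplied through environment variables along these lines (the values here are placeholders, not from your environment):

OTEL_SERVICE_NAME=my-api-caller
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod
SPLUNK_ACCESS_TOKEN=<your-access-token>
SPLUNK_REALM=us1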
Hello community, I'm encountering an issue while working with custom content in Splunk Security Essentials. I have created custom content with this search:

index=windows sourcetype=WinEventLog
| stats count(eval(action="success")) as successes count(eval(action="failure")) as failures by src
| where successes>0 AND failures>100

However, when I navigate to the content under "Content -> Security Content" and attempt to save this as a scheduled search, the option "Save Scheduled Search" is not available. I noticed that in the pre-existing content, such as "Basic Brute Force", this option is present. Could you please advise on why this option might not be appearing for my custom content? Are there any additional steps or configurations required to enable this feature for custom content? Thank you for your assistance! Best regards
Ideally, don't use join! You could try searching both indexes in your initial search and then "join" the data from the events using stats.
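A minimal sketch of the stats-based approach, with hypothetical index and field names:

(index=index_a sourcetype=st_a) OR (index=index_b sourcetype=st_b)
| stats values(fieldX) as fieldX values(fieldY) as fieldY by common_id

One search pulls events from both indexes, and stats glues together every row that shares the same common_id, with no subsearch limits to worry about.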
Have you tried using a left (also called outer) join vs. an inner join? An inner join will only give you data where the node_id appears in both sets of data. A left join will give you all the results from the base search, joined where it matches in the subsearch.
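In your query that would be a one-word change, keeping the subsearch exactly as you have it:

| join type=left node_id [ search index=aws-cpe-osc ... ]

Also keep in mind that join runs the subsearch under result and runtime limits, so an inner join can silently drop rows even when the data exists; the stats approach suggested above sidesteps that.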
Is something like this what you are looking for? Set the time range picker to your desired range.

index=windows EventCode=4624 Account_Name IN ("Larry","Curly","Moe")
| eval Logon_Account_Name=mvindex(Account_Name, 1)
| table _time, ComputerName, Logon_Account_Name
| sort _time
I am not getting full data in the output when combining 2 queries using join. When I run the first query individually, I get 143 results after dedup, but upon joining, I am getting only 71 results, whereas I know that for the remaining records data is available when running the 2nd query individually. How can I fix this? I am searching for records where pods got claimed, then searching for the connected time using a subsearch, and I need the output of all columns in tabular format.

index=aws-cpe-scl source=*winserver* "methodPath=POST:/scl/v1/equipment/router/*/claim/pods" responseJson "techMobile=true"
| rex "responseJson=(?<json>.*)"
| spath input=json path=claimed{}.boxSerialNumber output=podSerialNumber
| spath input=json path=claimed{}.locationId output=locationId
| eval node_id = substr(podSerialNumber, 0, 10)
| eval winClaimTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
| table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile
| dedup routerMac, node_id sortby winClaimTime
| join type=inner node_id
    [ search index=aws-cpe-osc ConnectionAgent "Node * connected:" model=PP203X
    | rex field=_raw "Node\s(?<node_id>\w+)\sconnected"
    | eval nodeFirstConnectedTime=strftime(_time,"%m/%d/%Y %H:%M:%S")
    | table nodeFirstConnectedTime, node_id
    | dedup node_id sortby nodeFirstConnectedTime ]
| table winClaimTime, accountNumber, routerMac, node_id, locationId, status, techMobile, nodeFirstConnectedTime
Can we apply the following example on a UF? Keep specific events and discard the rest https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest

The answer is no. The example is for any non-UF instance. On a parsing instance (heavy forwarder or indexer) you can modify the example:

Edit props.conf and add the following:

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \[sshd\]
DEST_KEY = _TCP_ROUTING
FORMAT = <valid-tcpoutgroup(s)>

Or edit props.conf and add the following:

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

Edit transforms.conf and add the following:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \[sshd\]
DEST_KEY = queue
FORMAT = indexQueue
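On an actual UF, the closest equivalent is per-input routing in inputs.conf, which selects a whole input (not individual events) for a given output group; a sketch with a hypothetical group name:

[monitor:///var/log/messages]
_TCP_ROUTING = my_indexers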
Version 9.2.2 is available from the main Download page at https://www.splunk.com/en_us/download/splunk-enterprise.html  
Try removing the quotes from the file path. Check splunkd.log for errors relating to that input.
Effectively, I want to comb through the Windows event logs to determine logon dates and times for specific user(s) and output those entries into a table with username, date, and time. We have a windows index, and we want to query the last seven days and the number of logins for a given user. I would imagine it'd be fairly simple to do; I just don't know SPL. This is why I engaged the brain trust online in this forum. I don't splunk as a day job, so I'm not familiar with the intricacies of SPL. In short: give me all entries from the Windows security logs for the last seven days from the windows index for a specific user with event ID 4624. Thank you.
I want Splunk to ingest my AV log. I made the following entry in the inputs.conf file. Note: the log file is a text file with no formatting.

[monitor://C:ProgramData\'Endpoint Security'\logs\OnDemandScan_Activity.log]
disable=0
index=winlogs
sourcetype=WinEventLog:AntiVirus
start_from=0
current_only=0
checkpointInterval = 5
renderXml=false

My question is: Is the stanza written correctly? When I do a search, I am not seeing anything.
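For reference, a hedged sketch of how that stanza would more typically be written: the quotes dropped from the path (per the answer above), a backslash added after C:, "disabled" spelled with a d, and the WinEventLog-specific settings (start_from, current_only, checkpointInterval, renderXml) removed, since they don't apply to file monitor inputs; index and sourcetype are kept as in the question:

[monitor://C:\ProgramData\Endpoint Security\logs\OnDemandScan_Activity.log]
disabled = 0
index = winlogs
sourcetype = WinEventLog:AntiVirus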