All Posts

Seems to be working for the rest of the fields, but not for CI_V2. It is creating the field value CI_V2="CI": "V2 . It should be CI_V2 = V2.
Hi @pm2012, this is a decade-old post, but it should give you some ideas: https://community.splunk.com/t5/Getting-Data-In/How-do-I-tell-if-a-forwarder-is-down/m-p/10407
Hello, thanks for your assistance. I will accept your solution. Can you also comment below?

By "groups of commands" I meant "group of searches"; I will use this term moving forward.

When I checked "Enable summary indexing" on a scheduled report, it automatically appended the following statement at the end of the search:

| summaryindex spool=t uselb=t addtime=t index="summary" file="[filename].stash_new" name="test_ip" marker="hostname=\"https://test.com/\",report=\"test_ip\""

When I then search the summary:

index=summary report=test_ip | dedup sourcetype

the sourcetype is stash, while the original sourcetype is syslog. I read the link you sent; it states that if I change the sourcetype, it will incur license usage:

sourcetype
Syntax: sourcetype=<string>
Description: The name of the source type that you want to specify for the events. By specifying a sourcetype outside of stash, you will incur license usage. This option is not valid when output_format=hec.
Default: stash

The solution you suggested is to split the events so they don't have multivalues before the summary index. Or can I split the multivalues after the summary index? Thanks
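Regarding the last question in the post above: multivalue fields can usually be split after summary indexing with mvexpand at search time. A minimal sketch, assuming a hypothetical multivalue field named src_ip (the field name is a placeholder, not from the original post):

```spl
index=summary report=test_ip
| mvexpand src_ip
| stats count by src_ip
```

Note that mvexpand multiplies the event count, so any stats computed afterwards should be checked for double counting.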
Hi all, our scenario is this: in our AWS environment, we want to collect logs using the universal forwarder from our Linux, Windows, and EKS servers. The catch is that we don't have internet access in this environment (it is non-routable). Can anyone please suggest how we can install the forwarder and use it to forward our logs to a centralized server for monitoring? There are three resources we want to collect logs from:
- Linux servers
- Windows servers
- EKS cluster
We are referring to the golden ticket attack. According to the Kerberos mechanism, a prerequisite for a service ticket request is a user ticket request (or the renewal of an existing ticket). When this is not the case and we do not see a corresponding prior logon event, the user ticket is suspected to be forged or stolen from another machine. So the logic of the detection is that the service ticket request (EventCode=4769) is not preceded by any of the following corresponding events:
1. User ticket (TGT) request (EventCode=4768).
2. Ticket renewal request (EventCode=4770).
3. Logon event (EventCode=4624).
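The detection logic above could be sketched in SPL roughly as follows. This is only an illustration: the index name, the user field, and the correlation key are assumptions, and a real detection would also need a bounded time window and tuning for renewals that happened before the search window:

```spl
index=wineventlog EventCode IN (4768, 4769, 4770, 4624)
| sort 0 _time
| streamstats count(eval(EventCode IN (4768, 4770, 4624))) as prior_events by user
| where EventCode=4769 AND prior_events=0
```

The streamstats call counts, per user, how many qualifying prior events were seen before each record; a 4769 with a zero count is the suspicious case.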
Is there any prebuilt search (like a rest command) to find the number of triggered alerts for a particular dashboard? If not, can we create a search which helps identify which triggered alert is associated with which dashboard for a specific time period?
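As a starting point for the question above: triggered (fired) alerts can typically be listed via the REST endpoint for fired alerts, though Splunk does not track an alert-to-dashboard association, so that mapping would have to be maintained manually. A sketch (field availability varies by version):

```spl
| rest /services/alerts/fired_alerts
| table title triggered_alert_count eai:acl.app
```

Relating the saved search names returned here to a dashboard would require matching them against the searches the dashboard uses.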
Hi SMEs, I would like to create an alert in Splunk ES which should trigger if any heavy forwarder is rebooted or shut down by someone. Thanks in advance.
Thank you @ITWhisperer and @gcusello. It is working now. If anything more is required, I will get back. Thanks again.
The third option of editing in Simple XML still works as of today! However, the first option no longer does; I get a JavaScript error. Not to mention that the forced XML v=1.0 issue will deprecate this solution option soon.
Those settings belong in props.conf on the indexers and heavy forwarders. BTW, the TIME_PREFIX setting should describe what comes *before* the timestamp, not the timestamp itself. The inputs.conf file should look a little like this:

[monitor:///path/to/file]
index = foo
sourcetype = mysourcetype
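To illustrate the TIME_PREFIX point above with a made-up example (the sourcetype name and log layout are hypothetical): if events look like host=web01 ts=2023-10-30 22:10:40 msg=..., the props.conf stanza anchors on the text that precedes the timestamp, not on the timestamp pattern itself:

```spl
[mysourcetype]
TIME_PREFIX = ts=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```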
Hi @bennett_riegel
1. Did you download the app as a tar file from Splunkbase? (The file name looks like "splunk-security-essentials_371.tgz".)
2. On your Splunk instance, please go to (left-side Apps dropdown) Apps ---> Manage Apps ---> Install app from file.
3. Then select the tar file ("splunk-security-essentials_371.tgz") and load it; it should install smoothly.
4. A Splunk restart will then be required.
Hi @AL3Z, could you please edit the sample log (remove all important things like IP addresses, usernames, and any sensitive info)? Thanks. As for the props and transforms, that requires some homework from your side. I will try my best to create them and suggest them back to you, thanks.
Hi @R15, this search actually runs fine. May we know the remaining portions of the search (after calculating the avg, how do you handle the avg values)? If you provide a screenshot, that would be of great help, thanks.
The question is understandable. The answer, however, is that you can't do that reliably with Splunk's built-in functionality. Splunk processes one event at a time and doesn't keep any state that could be carried from one event to another. You can sometimes do some magic by cloning events and cutting different parts from each copy, but that hack is ugly, non-scalable, and inefficient.
I was building a new search and started getting this error with various functions. I simplified my search down to something straight out of the documentation to make sure I wasn't missing something silly, but I still get the error even with this:

index=* | eval c=avg(1, 2, 3)

What's going on?
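For reference, if avg() is not available as an eval function in the Splunk version in use (it was added to eval relatively recently; older versions only support avg() as a stats aggregation), the same value can be computed with plain arithmetic:

```spl
index=* | eval c=(1 + 2 + 3) / 3
```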
Getting a ton of these telemetry errors in the Event Log of a Windows server with a UF installed. They started a few days ago. What could be causing them? No changes have been made to the UF or Splunk infrastructure recently.

1.6987038408387303e+09 error exporterhelper/queued_retry.go:183 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "name": "signalfx", "error": "Permanent error: \"HTTP/2.0 401 Unauthorized\\r\\nContent-Length: 0\\r\\nDate: Mon, 30 Oct 2023 22:10:40 GMT\\r\\nServer: istio-envoy\\r\\nWww-Authenticate: Basic realm=\\\"Splunk\\\"\\r\\nX-Envoy-Upstream-Service-Time: 5\\r\\n\\r\\n\"", "dropped_items": 50}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
    /builds/o11y-gdi/splunk-otel-collector-releaser/.go/pkg/mod/go.opentelemetry.io/collector@v0.53.0/exporter/exporterhelper/queued_retry.go:183
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
    /builds/o11y-gdi/splunk-otel-collector-releaser/.go/pkg/mod/go.opentelemetry.io/collector@v0.53.0/exporter/exporterhelper/metrics.go:132
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
    /builds/o11y-gdi/splunk-otel-collector-releaser/.go/pkg/mod/go.opentelemetry.io/collector@v0.53.0/exporter/exporterhelper/queued_retry_inmemory.go:119
go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
    /builds/o11y-gdi/splunk-otel-collector-releaser/.go/pkg/mod/go.opentelemetry.io/collector@v0.53.0/exporter/exporterhelper/internal/bounded_memory_queue.go:82
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
    /builds/o11y-gdi/splunk-otel-collector-releaser/.go/pkg/mod/go.opentelemetry.io/collector@v0.53.0/exporter/exporterhelper/internal/bounded_memory_queue.go:69
I did look into the bin command a bit further, and it did help, thanks again! I needed to timechart my data using the latest value of each day, since the data kept growing and the data points were just snapshots of the storage for that day. Final code:

| bin _time span=12h
| stats latest(<storage size>) as <storage size> by _time data_storage
| timechart span=12h sum(<storage size>) by data_storage

For my requirement, I just needed bin span=1d and timechart span=1d to get the daily data trend for the past year of data.
Hi, we need to forward XML documents from a UF to indexers that have key fields both in a one-time HEADER section and in a repeated section that can occur up to 100,000 times. So, for example, the file could look like:

<PUBS>
<HEADER><Identifier>93234</Identifier>
<REPEATSECTION><Balance>8751.23</Balance></REPEATSECTION>
<REPEATSECTION><Balance>943.43</Balance></REPEATSECTION>
... note: repeats up to 100,000 times with many, many more fields than shown here. Total file size >= 300 MB ...
<REPEATSECTION><Balance>123.233</Balance></REPEATSECTION>
</PUBS>

If the UF breaks events before <REPEATSECTION>, then we could have one Splunk event per repeat section, but the fields in the HEADER would not be available. If the UF sends the whole 300 MB file to an indexer, is there a configuration of props/transforms on the indexer that can create one Splunk event per REPEATSECTION but also get the fields from the HEADER section? I'm trying to ask a good question here as best I can. Does my question make sense to anyone? Thanks!
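For the event-breaking half of the question above, a props.conf sketch for the indexer might look like the following (the sourcetype name is a placeholder). Note this only splits the file on REPEATSECTION boundaries; copying HEADER fields into every resulting event is not something index-time line breaking can do on its own:

```spl
[xml_pubs]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\s*)<REPEATSECTION>
TRUNCATE = 0
```

The capture group in LINE_BREAKER is discarded, so each event starts at <REPEATSECTION>. The HEADER would land in the first event only, which is why pre-processing the file (duplicating header fields into each repeat section before forwarding) is often the more practical approach.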
Thanks!!! @ITWhisperer 
And this is a question about which functionality of which add-on/product? All we know is that somewhere (where?) you have some data that is different from what you'd expect. We have no clue whether this is raw data (if so, where does it come from?) or processed data (how?).