All Posts

Dear Splunk Dev team,

One more simple typo issue. A fresh install of Splunk 9.4.0 (last week's 9.3.2 also had this issue, but I thought I'd wait for the next version before posting) shows the warning message: "Error in 'lookup' command: Could not construct lookup 'test_lenlookup, data'. See search.log for more details." (On older Splunk versions I remember this search.log, but nowadays neither search.log nor searches.log is available.)

Per https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/WhatSplunklogsaboutitself (What Splunk logs about itself), the message should read "See searches.log for more details."

One more, bigger issue: neither search.log nor searches.log is available. None of these searches return anything (the doc says the Splunk search logs are located in sub-folders under $SPLUNK_HOME/var/run/splunk/dispatch/):

index=_* source="*search.log" OR index=_* source="*searches.log" OR index=_* source="C:\Program Files\Splunk\var\run\splunk\dispatch*"

Will post this to Splunk Slack as well, thanks.

If any post helped you in any way, please consider adding a karma point, thanks.
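Follow-up note: the per-search search.log files live on disk under $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/ and are not indexed into the _internal indexes by default, which is why the index=_* searches above come back empty. As a rough sketch, one way to list recent search jobs (and hence the dispatch sub-folders that should contain a search.log) is the rest command; the fields shown are the ones the jobs endpoint normally returns:

| rest /services/search/jobs splunk_server=local
| table sid dispatchState title

Each sid corresponds to a sub-folder of $SPLUNK_HOME/var/run/splunk/dispatch/, and the search.log for that job sits inside it; it can also be downloaded from the Job Inspector.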
Hi @arunkuriakose

>>> i am trying to visualise this in such a way that i have a live dashboard which shows me which users are passing through which gate

"Visualizing" this through a "live dashboard" - I understand the requirement, but it may be difficult to implement as stated. Maybe reconsider it like this:

1. Have a basic dashboard with two panels.
2. One panel, simple in design, for gate1; this panel shows which emp_id crosses gate1 at what time (a rough search sketch follows below):

   time     emp_id
   9:00am   1234
   9:05am   2383

3. Have another panel for gate2 with the same design logic.
4. Auto-refresh the dashboard every, say, 30 seconds (increase or decrease this depending on your requirement).
5. You can add more panels, e.g. a "missing person" panel (emp_id that entered through gate1 but has not yet exited through gate2, etc.).

Please add karma / upvote to any post which helped you in any way, thanks.
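A minimal sketch of those panel searches, assuming hypothetical field names gate and emp_id and an index called badge_logs - adjust everything to your actual data.

Gate 1 panel (most recent crossings first):

index=badge_logs gate="gate1"
| sort - _time
| eval time=strftime(_time, "%I:%M %p")
| table time emp_id

Missing-person panel (seen at gate1 but not yet at gate2):

index=badge_logs (gate="gate1" OR gate="gate2")
| stats values(gate) as gates by emp_id
| where isnull(mvfind(gates, "^gate2$"))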
I have saved the report using no time range. The report works, getting results for the last 60 minutes as expected. My issue is that when I query testReport I want to use different earliest and latest times, so I can have two time ranges in the same chart. Something like:

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"
| table id, response_time
| eval lineSource = "first_day"
| append
    [| savedsearch "testReport" earliest="12/09/2024:00:00:00" latest="12/09/2024:23:59:00"
    | table id, response_time
    | eval lineSource = "second_day"]
Hi @Alex_LC,

You can try the below.

props.conf:

[my_sourcetype]
LINE_BREAKER = (\[1\]\[DATA\]BEGIN[-\s]+)
SHOULD_LINEMERGE = false
TRANSFORMS-transform2xml = transform2xml
KV_MODE = xml

transforms.conf:

[transform2xml]
REGEX = ([^\[]+)(\[\d+\][\r\n]+<xml>)([^\[]+)(<\/xml>[^$]+)
FORMAT = <xml><time>$1</time>$3</xml>
DEST_KEY = _raw

It should create a separate event for each block, with a time field, like below:

<xml><time>08:03:09</time>
<tag1>some more data</tag1>
<nestedTag>
<tag2>fooband a bit more</tag2>
</nestedTag>
</xml>
Per the Search Reference Manual:

If you specify All Time in the time range picker, the savedsearch command uses the time range that was saved with the saved search. If you specify any other time in the time range picker, the time range that you specify overrides the time range that was saved with the saved search.
For simplicity, assume I have the following saved as a report (testReport):

index=testindex host=testhost earliest=-90m latest=now

I need to create 2 bar graphs in the same chart comparing two dates. For starters, I need to be able to run the above with a time range I specify, overriding the time range above:

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"

I have seen a few similar questions here, but I don't think any of them has a working solution.
https://docs.splunk.com/Documentation/Splunk/9.4.0/ReleaseNotes/MeetSplunk#What.27s_New_in_9.4

Why the new Splunk TcpOutput persistent queue?

- Scheduled loss of connectivity for an extended period, with data transmission resuming once the connection is back up.
- Assuming there is enough storage, the tcpout output queue can persist all events to disk instead of buying an expensive (and unsupported) third-party subscription to persist data to SQS/S3.
- If there are two tcpout destinations and one is down for an extended period, and the down destination has a large enough persistent queue, then the second destination is not blocked. The second destination blocks only once the persistent queue of the down destination is full.
- No need to pay for third-party SQS and S3 puts.
- A third-party/external S3 persistent queue introduces permanent additional latency (due to the detour through the external SQS/S3 queue), and there is a chance of event loss (events ending up in the SQS/S3 DLQ).
- Third-party/external SQS/S3 persistent queuing requires batching events, which adds further latency in order to reduce SQS/S3 put costs.
- Unwanted additional network bandwidth is used to upload all data to SQS/S3 and then download it again.
- Third parties impose upload payload size limits.
- Monitored corporate laptops may be off the network, not connected to the internet, or not connected to VPN for extended periods. Laptops might later be switched off, but events should be persisted and forwarded whenever the laptop reconnects to the network.
- Sensitive data stays persisted within your own network.
- On-demand persistent queuing on the forwarding tier when the indexer cluster is down.
- On-demand persistent queuing on the forwarding tier when indexer cluster indexing is slow due to high system load.
- On-demand persistent queuing on the forwarding tier when the indexer cluster is in a rolling restart.
- On-demand persistent queuing on the forwarding tier during an indexer cluster upgrade.
- No need to use a decade-old S2S protocol version as suggested by some third-party vendors (you all know enableOldS2SProtocol=true in outputs.conf).

How to enable? Just set persistentQueueSize in outputs.conf:

[tcpout:splunk-group1]
persistentQueueSize=1TB

[tcpout:splunk-group2]
persistentQueueSize=2TB

Note: Sizing guide coming soon.
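An additional rough sketch (not from the release notes) for keeping an eye on how full the output queues are, using the forwarder's own metrics.log; the group/name values shown follow the usual queue metrics naming, so verify them against your _internal data before relying on this:

index=_internal source=*metrics.log* group=queue name=tcpout*
| eval fill_pct=round(100 * current_size_kb / max_size_kb, 1)
| timechart span=5m max(fill_pct) by host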
Thanks for your detailed response.

@genesiusj wrote:
Q - "What size is your lookup - you may well be hitting the default limits defined (25MB)"
A - csv: 1 million records - 448,500 bytes // kvstore: 3 million records - 2,743.66 MB

That seems wrong - 1,000,000 records must be more than 448,500 bytes - there has to be at least a line feed between rows, which alone would give you 1,000,000 bytes. Anyway, if the CSV is the origin dataset, then I don't think the lookup limit is going to be relevant, but are you doing something like

| inputlookup csv
| eval mod_addr=process_address...
| lookup my_kvstore addr as mod_addr output owner

The fact that this is all happening on the search head means that the SH will probably be screaming - what is the size of the SH, and have you checked its performance profile during the search?

Q - "What are you currently doing to be 'fuzzy' so your matches currently work or are you really looking for exact matches somewhere in your data?"
A - I stripped off any non-numeric characters at the beginning of the address on the lookup and use that field for the as in my lookup command with my kvstore

| lookup my_kvstore addr as mod_addr output owner

I have in the past done something similar using wine titles: I "normalised" the wine title by removing all stop words, all words <= 3 characters, and all numbers. I then split to a multivalue field, convert to lower case, then sort and join. I have done this in the base dataset (i.e. your KV store) and also in all wines I see. It is reasonably reliable (a rough sketch of this normalisation follows below). However, that doesn't really solve your issue with the volume...

Q - Also, if you are just looking at some exact match somewhere, then the KV store may benefit from using accelerated fields - that can speed up lookups against the KV store (if that's the way you're doing it) significantly.
A - Using the above code, the addr would be the accelerated field, correct?

Yes, I have seen very good performance improvements with large data sets using accelerated_fields, so do this first. If you have the option to boost the SH specs, that may benefit, but first check what sort of bottleneck you have on the SH.
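A minimal sketch of that kind of normalisation in SPL, assuming a hypothetical field called title; the stop-word list here is illustrative only:

| eval norm=lower(title)
| eval norm=replace(norm, "[0-9]+", "")
| makemv delim=" " norm
| eval norm=mvfilter(len(norm) > 3 AND NOT match(norm, "^(the|and|with|from)$"))
| eval norm=mvjoin(mvsort(norm), " ")

The same transformation gets applied to the reference data (the KV store) and to the incoming values, so the normalised strings can be compared directly.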
I too don't quite get your statement "where value is max" - you said you have

title1 title2 title3 title4 value

so I assumed the titles are text elements and the value is numeric. Does the table below model your data, or is it different?

title1   title4     value
TitleC   Title4-X   16
TitleA   Title4-X   69
TitleA   Title4-X   83
TitleC   Title4-X   92
TitleB   Title4-X   45
TitleA   Title4-Y   90
TitleA   Title4-Y   87
TitleB   Title4-Y   97
TitleB   Title4-Y   7
TitleB   Title4-Y   54
TitleB   Title4-Y   85
TitleC   Title4-Y   58
TitleC   Title4-Y   18
TitleA   Title4-Z   10
TitleC   Title4-Z   31
TitleA   Title4-Z   38
TitleA   Title4-Z   46
TitleB   Title4-Z   57
TitleA   Title4-Z   27
TitleB   Title4-Z   71

What does max in your description represent? I understood you want all the values of title4 "where value is max". Can you define what max is? For Title4-X, Y and Z, the max of value by title4 is 92, 97 and 71 respectively. For title1 A, B and C, the max of value by title1 is 90, 97 and 92. Does either of these describe your 'max'? An example would be useful.
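For reference, if "where value is max" means keeping, for each title4, the row(s) whose value equals the group maximum, a minimal sketch against data shaped like the table above would be:

| eventstats max(value) as max_value by title4
| where value=max_value

Swap title4 for title1 in the by clause if the grouping is meant to be by title1 instead.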
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. The data comes to my indexer as expected. One of the fields that I send with the logs is the TRANSPORT field. When I search the logs I can see that the TRANSPORT event metadata is present as expected.

I would like to set the logs' sourcetype dynamically based on the value of the TRANSPORT field. Here are the props.conf and transforms.conf that I'm trying to use.

props.conf:

[default]
TRANSFORMS-change_sourcetype = set_new_sourcetype

transforms.conf:

[set_new_sourcetype]
REGEX = TRANSPORT=([^\s]+)
FORMAT = sourcetype::test
DEST_KEY = MetaData:Sourcetype

Unfortunately, the above seems to have no impact on the logs. I think the problem lies in the REGEX field. When I change it to REGEX = .* , all of the events have their sourcetype set to test as expected. Why can't I use the TRANSPORT field in the REGEX?
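For reference, since an index-time TRANSFORMS regex is by default applied to _raw (unless SOURCE_KEY says otherwise), one rough way to check whether TRANSPORT=... actually appears in the raw event text, rather than only as indexed metadata, is a quick search like this sketch (index name is a placeholder):

index=journald_test
| head 10
| eval transport_in_raw=if(match(_raw, "TRANSPORT="), "yes", "no")
| table _raw transport_in_raw TRANSPORT

If transport_in_raw comes back "no" everywhere, the regex would have nothing to match against at parse time, which would explain why only REGEX = .* takes effect.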
I pulled this from one of the public examples for manual instrumentation and it worked. I think the only line I modified was the "require" line. One more note: I used .htaccess to expose env variables.

require __DIR__ . '/vendor/autoload.php';

<?php

declare(strict_types=1);

namespace OpenTelemetry\Example;

require __DIR__ . '/vendor/autoload.php';

use OpenTelemetry\Contrib\Otlp\OtlpHttpTransportFactory;
use OpenTelemetry\Contrib\Otlp\SpanExporter;
use OpenTelemetry\SDK\Trace\SpanProcessor\SimpleSpanProcessor;
use OpenTelemetry\SDK\Trace\TracerProvider;

$transport = (new OtlpHttpTransportFactory())->create('http://localhost:4318/v1/traces', 'application/x-protobuf');
$exporter = new SpanExporter($transport);

echo 'Starting OTLP example';

$tracerProvider = new TracerProvider(
    new SimpleSpanProcessor(
        $exporter
    )
);
$tracer = $tracerProvider->getTracer('io.opentelemetry.contrib.php');

$root = $span = $tracer->spanBuilder('root')->startSpan();
$scope = $span->activate();

for ($i = 0; $i < 3; $i++) {
    // start a span, register some events
    $span = $tracer->spanBuilder('loop-' . $i)->startSpan();

    $span->setAttribute('remote_ip', '1.2.3.4')
        ->setAttribute('country', 'USA');

    $span->addEvent('found_login' . $i, [
        'id' => $i,
        'username' => 'otuser' . $i,
    ]);
    $span->addEvent('generated_session', [
        'id' => md5((string) microtime(true)),
    ]);

    $span->end();
}
$root->end();
$scope->detach();

echo PHP_EOL . 'OTLP example complete! ';
echo PHP_EOL;

$tracerProvider->shutdown();
?>
The filenames match exactly, and TargetLocation has the file name. As in my example, the file names are different, and it is not related to position. When I modify the stats to values(file_name) I get results, but they are very strange results.
I think I've read about similar problems back then but I don't recall details, sadly.
Ok. Honestly, I'm a bit confused. I don't understand what you mean by "where value is max". As I understand it, if you have

title1   title4
1        3
2        5
3        7
1        2
2        3
3        5
1        1

you want

title1   title4
1        3
2        5
3        7

as a result, because for each value of title1 you want the max value of title4, no? Maybe we just misunderstand each other...
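If that reading is right, a minimal sketch using the field names from the example above would be:

| stats max(title4) as title4 by title1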
It sounds like the file names don't completely match, or perhaps the TargetLocation event doesn't have the file name in it? Is it always the same file, or at least the same file position, e.g. always the last in the list? Or possibly files after a particular point in the XML message? Without being able to see your data, it is a bit difficult to determine what might be wrong.
Modify the Cybereason REST client Python script and set verify to False.
Hello, in case you can't use an SSL certificate, you may modify the Cybereason Python script.
I still get "Pending" for all the files even though it was successful and the timestamp is there.
Thanks @bowesmana @sainag_splunk, I tried both and the results were nearly the same! Since the cn field is already extracted, I modified the search like this:

.... base search ....
| rex field=cn "(?<ipAddr>\d{1,3}[._]\d{1,3}[._]\d{1,3}[._]\d{1,3})"
| eval cn = coalesce(replace(ipAddr, "_", "."), cn)

In case anyone runs into this thread later. Much appreciated!