All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

It was so simple. Thank you so much, it worked!
Which index and sourcetype have the events been ingested into?
In the general case, both index=whatever | stats count and | tstats count where index=whatever, run over a static period of time in the past, should give you the same result. If there is a difference, it might mean that you're still ingesting data into that period, so subsequent runs of either of those commands will yield different results. But if the two searches repeatably return two different static values, it might signal bucket corruption.
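For concreteness, a minimal sketch of the two searches being compared (index name and time window are placeholders) — both should be run over the same fixed window to rule out late-arriving data:

```
index=whatever earliest=-24h@h latest=@h
| stats count

| tstats count where index=whatever earliest=-24h@h latest=@h
```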
I am trying to understand why I get a different total event count for the following searches:

1. index=some_specific_index (returns a total of 7,601,134 events)
2. | tstats count where index=some_specific_index (returns 7,593,248)

I have the same date and time range set when I run each query. I would like to understand why tstats and stats return different values here.
Hello @gcusello, thank you so much for your response, truly appreciate it. It's a pretty interesting issue.

When I use [without condition]:

| inputlookup account_audit.csv | eval updatedate=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S"), comparetdate=now()-86400*30 | table account_id Name Org_Code UPDATE_DATE

I get 7 independent events:

121 test Y 01/24/2024 04:52:10
121 test Y 01/24/2024 04:52:12
121 test Y 01/24/2024 04:52:11
123 test2 A 01/30/2024 12:50:11
123 test2 A 01/30/2024 12:50:20
126 test3 B 02/01/2024 11:12:23
126 test3 B 02/01/2024 11:12:21

But when I use [with condition]:

| inputlookup account_audit.csv | eval updatedate=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S"), comparetdate=now()-86400*30 | search updatedate>comparedate Name=* | table account_id Name Org_Code UPDATE_DATE

I get 3 events (each separated by a dotted line for clarity):

121 test Y 01/24/2024 04:52:10
121
121
---------------------
123 test2 A 01/30/2024 12:50:10
123
------------
126 test3 B 02/01/2024 11:12:02
126
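For reference, a hedged sketch of what the conditional search presumably intends. Note two details in the post's version: the eval creates comparetdate but the search references comparedate, and the search command does not compare two fields against each other, so where is used here instead (field and lookup names are taken from the post):

```
| inputlookup account_audit.csv
| eval updatedate=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S"), comparedate=now()-86400*30
| where updatedate>comparedate AND isnotnull(Name)
| table account_id Name Org_Code UPDATE_DATE
```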
Thank you for posting this.  I don't understand why so many useful API calls like this aren't included in the documentation.
Hi, I want a search query to fetch PCF application instances and their event messages, such as start, stop, and crash, along with the reason. Can anyone help me with a query to fetch this? Thanks, Abhigyan.
Thanks Esky!
Hi, I checked the inputs; all configurations seem right.
PickleRick, I found the answer to my question.

The answer: if, when searching an index for data, you come across an event that appears to be raw text:

1. Note the source type and verify it exists (Settings > Source types); search for the specific source type and create it if it does not exist.
2. In Search and Reporting, click the drop-down associated with the event that contains "raw" text.
3. Click the drop-down titled "Event Actions" and select "Extract Fields".
4. Observe the log and select the appropriate delimiter.
5. Name your fields.
6. Assign appropriate permissions, and enjoy.

To answer your questions: I chose not to use Splunkbase apps or add-ons for this particular task because those particular apps and add-ons are not intuitive to configure, and usable documentation for them is hard to find. I do not know how to tell whether I am using Fast Mode search or not.

Thanks for the help.
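The UI steps above ultimately produce a field extraction; a hedged, hypothetical equivalent in SPL using rex (the index, sourcetype, field names, and comma delimiter are all assumptions for illustration):

```
index=my_index sourcetype=my_raw_sourcetype
| rex field=_raw "^(?<first_field>[^,]+),(?<second_field>[^,]+),(?<third_field>[^,]+)$"
| table first_field second_field third_field
```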
I was able to fix my issue with symbolic links, thanks to the following topic:
https://community.splunk.com/t5/Deployment-Architecture/How-to-move-index-from-one-hard-drive-to-another-in-Splunk/m-p/170733

Here are the steps I took:

1. I created two directories on each volume, like this:

mkdir /Splunk-Storage/HOT/HOT1
mkdir /Splunk-Storage/HOT/HOT2
mkdir /Splunk-Storage/COLD/COLD1
mkdir /Splunk-Storage/COLD/COLD2

2. I stopped Splunk on one indexer, then moved the indexes into the appropriate directories:

mv /Splunk-Storage/HOT/testindex1 /Splunk-Storage/HOT/HOT1/testindex1
mv /Splunk-Storage/COLD/testindex1 /Splunk-Storage/COLD/COLD1/testindex1
mv /Splunk-Storage/HOT/testindex2 /Splunk-Storage/HOT/HOT2/testindex2
mv /Splunk-Storage/COLD/testindex2 /Splunk-Storage/COLD/COLD2/testindex2

It took no time, of course, since the moves stayed on the same volume.

3. Then I created symbolic links, like this:

ln -s /Splunk-Storage/HOT/HOT1/testindex1 /Splunk-Storage/HOT/testindex1
ln -s /Splunk-Storage/COLD/COLD1/testindex1 /Splunk-Storage/COLD/testindex1
ln -s /Splunk-Storage/HOT/HOT2/testindex2 /Splunk-Storage/HOT/testindex2
ln -s /Splunk-Storage/COLD/COLD2/testindex2 /Splunk-Storage/COLD/testindex2

4. Then I started Splunk. At this point, Splunk remained unaware of the changes on the underlying file system, yet it continued to function, with the actual data now residing in the correct path.

5. After repeating this process on all indexers, I modified indexes.conf on the CM and pushed the changes. After checking that everything was correct, I removed the soft links.
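For reference, the indexes.conf change pushed from the CM would point each index at its new location; a hedged sketch using the directory names above (the thawedPath values are placeholders):

```
[testindex1]
homePath   = /Splunk-Storage/HOT/HOT1/testindex1/db
coldPath   = /Splunk-Storage/COLD/COLD1/testindex1/colddb
thawedPath = $SPLUNK_DB/testindex1/thaweddb

[testindex2]
homePath   = /Splunk-Storage/HOT/HOT2/testindex2/db
coldPath   = /Splunk-Storage/COLD/COLD2/testindex2/colddb
thawedPath = $SPLUNK_DB/testindex2/thaweddb
```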
Thanks @ITWhisperer 
Use max_match:

| rex max_match=0 field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"

Note that this will give you a multivalue field.
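If one row per match is preferred over a multivalue field, the result can be expanded; a minimal sketch:

```
| rex max_match=0 field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"
| mvexpand myfield
```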
Hi @gcusello @ITWhisperer

| rex field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"

The above query extracts the field, but if I want to extract APP and WEB, which are on the same line, how do I extract them?

Fi a:\abc\def\MNOPQ.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\APP.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\WEB.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\SQL.txt content is expected to include "A H Dis Query,0,0"
<input type="dropdown" token="application" searchWhenChanged="false">
  <label>Application</label>
  <choice value="*">All</choice>
  <fieldForLabel>application_Succ</fieldForLabel>
  <fieldForValue>application_Fail</fieldForValue>
  <search>
    <query>|inputlookup application_lists.csv |search country=$country$ |sort country application_Succ |fields application_Succ application_Fail</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>
Hi @SplunkDash,

let me understand: do you have in each event the four fields account_id, Name, Org_Code, and UPDATE_DATE? Because it shouldn't be possible that some fields aren't visualized unless they are missing in the lookup.

In this case, to assign the values where missing, you could use join (even if I hate this command!). Is this your requirement, to put in the empty cells the values from other rows?

Ciao.

Giuseppe
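A hedged sketch of the join approach mentioned above, assuming account_id is the common key (the base index and sourcetype are placeholders, not from the thread):

```
index=main sourcetype=accounts
| join type=left account_id
    [ | inputlookup account_audit.csv
      | fields account_id Name Org_Code UPDATE_DATE ]
| table account_id Name Org_Code UPDATE_DATE
```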
Hi @Muthu_Vinith,

good for you, see you next time!

Ciao and happy splunking,

Giuseppe

P.S.: Karma Points are appreciated by all the contributors.
Excellent question.

> or is indexing left up to the remote site indexer?

Left up to the remote site indexer. With RF 3, the remote site indexes twice as much as the source site; overall, each node indexes two additional replicated slices.

> Also, does Splunk replicate raw data or compressed data?

Compressed, if SSL is enabled and the following settings in server.conf on the indexers are set to true.

Under stanza [replication_port-ssl://<port>]:

useSSLCompression = <boolean>
* If true, enables SSL compression.
* Default: false

compressed = <boolean>
* DEPRECATED; use 'useSSLCompression' instead.
* Used only if 'useSSLCompression' is not set.

Under stanza [sslConfig]:

allowSslCompression = <boolean>
* If set to "true", the server allows clients to negotiate SSL-layer data compression.
* KV Store also observes this setting.
* If set to "false", KV Store disables TLS compression.
* Default: true
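Putting those settings together, a minimal server.conf sketch (the replication port number here is a placeholder):

```
[replication_port-ssl://9887]
useSSLCompression = true

[sslConfig]
allowSslCompression = true
```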
CHECK_METHOD = modtime is not working as expected due to a regression in 9.x: a wrong calculation leads to unexpected re-reading of files. Until the next patch, use the following workaround for inputs with CHECK_METHOD = modtime. In inputs.conf, set the following for each impacted stanza:

time_before_close = 0
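For illustration, a hedged inputs.conf stanza applying the workaround (the monitored path is hypothetical):

```
[monitor:///var/log/myapp/app.log]
CHECK_METHOD = modtime
time_before_close = 0
```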