All Posts

We are planning to migrate a server that plays multiple roles (DS, HEC, proxy, SC4S, syslog, etc.) to multiple servers by splitting the roles, e.g. server A plays the DS role, server B handles the HEC services, and so on. What would be the easiest approach to achieve this? It seems like a lot of work. Would it be recommended to do so in the first place? What criteria should we keep in mind while doing this migration?
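If you do split out the deployment server, the forwarder side of that move is usually just repointing deploymentclient.conf at the new host. A minimal sketch, assuming the default management port 8089 and a hypothetical hostname ds-new.example.com:

deploymentclient.conf (placed on or pushed to each deployment client)
[target-broker:deploymentServer]
targetUri = ds-new.example.com:8089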
I have syslog events being written to a HF locally via syslog-ng; these events are then consumed via a file input and the IP address in the log name is extracted as host. I now want to run an INGEST_EVAL on the IP address and use a lookup to change the host.

If I run the command from search I get the required result:

index=... | eval host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

This replaces host with "host_value". I have this working on an AIO instance with the following config. Now adding it to the HF tier:

/opt/splunk/etc/apps/myapp/lookups/lookup.csv (the lookup has global access and export = system):
host,host_value
1.2.3.4, myhostname

props.conf:
[mysourcetype]
TRANSFORMS-host_override = host_override

transforms.conf:
[host_override]
INGEST_EVAL = host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

When applied on the HF (restarted) I see that some of the hostnames are changed to "localhost" and the others remain unchanged (but this is either because the config is not working, or because the data is coming from another HF that does not have the test config applied). Any ideas - thx
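One thing worth ruling out on the HF: if lookup() cannot find or match the file at index time, json_extract() returns null and the host metadata can end up being rewritten unexpectedly, which would fit some events turning into "localhost". A minimal sketch of a guarded version, assuming coalesce() behaves in INGEST_EVAL as it does at search time, so the original host survives when there is no match:

transforms.conf
[host_override]
INGEST_EVAL = host=coalesce(json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value"), host)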
Hi, I want to know if there are any resources available to get a notification, or some other way to know, when a new Splunk Enterprise version is released. This could be through mail, an RSS feed or something similar. I already know that this one exists: https://www.splunk.com/page/release_rss but it is not up to date. Thanks, Zarge
| bin _time span=1d | stats sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount by _time
query:

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, Success), FailedCount=if(Total_Status!="0", count, Failed)

OUTPUT:

Total_Status  _time             count  FailedCount  SuccessCount
0             2022-01-12 13:30  100                 100
0             2022-01-12 13:00  200                 200
0             2022-01-13 11:30  110                 110
500           2022-01-13 11:00  2      2
500           2022-01-11 10:30  4      4
500           2022-01-11 10:00  8      8

But I want the output as shown in the table below:

_time       SuccessCount  FailedCount
2022-01-13  110           2
2022-01-12  300           0
2022-01-11  0             12
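For what it's worth, a minimal sketch of how the bin/stats answer above could be appended to this search; replacing the undefined Success/Failed fields with 0 is an assumption about the intent, so the daily sums work out:

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, 0), FailedCount=if(Total_Status!="0", count, 0)
|bin _time span=1d
|stats sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount by _time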
It was so simple. Thank you so much, it worked.
Which index and sourcetype have the events been ingested into?
In a general case, both index=whatever | stats count and | tstats count where index=whatever run over a static period of time in the past should give you the same result. If there is a difference it might mean that you're still ingesting data into that period of time, so subsequent runs of either of those commands will yield different results. But if the two searches repeatably return two different static values, it might signal bucket corruption.
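If bucket corruption is the suspicion, a quick cross-check is to sum the per-bucket event counts that dbinspect reports and compare them with the two searches. A minimal sketch (the index name is a placeholder):

| dbinspect index=whatever
| stats sum(eventCount) as eventCount by state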
I am trying to get an understanding of why I get a different total event count for the following searches:
1. index=some_specific_index (returns a total of 7,601,134 events)
2. | tstats count where index=some_specific_index (returns 7,593,248)
I have the same date and time range set when I run the query. I understand why tstats and stats have different values.
Hello @gcusello , Thank you so much for your response, truly appreciate it. It's a pretty interesting issue.

When I use (without the condition):

| inputlookup account_audit.csv
| eval updatedate=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S"), comparetdate=now()-86400*30
| table account_id Name Org_Code UPDATE_DATE

I get 7 independent events:

121 test Y 01/24/2024 04:52:10
121 test Y 01/24/2024 04:52:12
121 test Y 01/24/2024 04:52:11
123 test2 A 01/30/2024 12:50:11
123 test2 A 01/30/2024 12:50:20
126 test3 B 02/01/2024 11:12:23
126 test3 B 02/01/2024 11:12:21

But when I use (with the condition):

| inputlookup account_audit.csv
| eval updatedate=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S"), comparetdate=now()-86400*30
| search updatedate>comparedate Name=*
| table account_id Name Org_Code UPDATE_DATE

I get 3 events (each event separated by a dotted line for clear understanding):

121 test Y 01/24/2024 04:52:10
121
121
---------------------
123 test2 A 01/30/2024 12:50:10
123
------------
126 test3 B 02/01/2024 11:12:02
126
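A minimal sketch of what may have been intended, purely as an assumption: the eval creates comparetdate but the filter reads comparedate, the sample dates use four-digit years (so %Y rather than %y), and the search command compares a field against a literal value, so where is the safer choice for a field-to-field comparison:

| inputlookup account_audit.csv
| eval updatedate=strptime(UPDATE_DATE, "%m/%d/%Y %H:%M:%S"), comparedate=now()-86400*30
| where updatedate>comparedate AND isnotnull(Name)
| table account_id Name Org_Code UPDATE_DATE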
Thank you for posting this.  I don't understand why so many useful API calls like this aren't included in the documentation.
Hi, I want a search query to fetch PCF application instances and their event messages such as start, stop and crash, with the reason. Can anyone help me with a query to fetch this? Thanks, Abhigyan.
Thanks Esky!
Hi, I checked the inputs; all configurations seem right.
PickleRick, I found the answer to my question.

The answer: if, when searching an index for data, you come across an event that appears to be raw text:
1. Note the source type and verify it is created (Settings > Source types); search for the specific source type and create it if it does not exist.
2. In Search and Reporting, click the drop-down associated with the event that contains the "raw" text.
3. Click the drop-down titled "Event Actions" and select "Extract Fields".
4. Observe the log and select the appropriate delimiter.
5. Name your fields.
6. Assign appropriate permissions. And enjoy.

To answer your questions: I chose not to use Splunkbase apps or add-ons for this particular task because these particular apps and add-ons are not intuitive to configure and usable documentation for them is hard to find. I do not know how to identify whether I am using Fast mode for my searches or not.

Thanks for the help
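If you want that delimiter extraction to live in configuration rather than be rebuilt through the field extractor UI, a search-time report can also be defined by hand. A minimal sketch, where the sourcetype and field names are hypothetical:

props.conf
[my_sourcetype]
REPORT-delim_fields = my_delim_extraction

transforms.conf
[my_delim_extraction]
DELIMS = ","
FIELDS = "field1","field2","field3"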
I was able to fix my issue with symbolic links, thanks to the following topic: https://community.splunk.com/t5/Deployment-Architecture/How-to-move-index-from-one-hard-drive-to-another-in-Splunk/m-p/170733

Here are the steps I took:

I created two directories on each volume, like this:

mkdir /Splunk-Storage/HOT/HOT1
mkdir /Splunk-Storage/HOT/HOT2
mkdir /Splunk-Storage/COLD/COLD1
mkdir /Splunk-Storage/COLD/COLD2

I stopped Splunk on one indexer, then moved the indexes to the appropriate directories as desired:

mv /Splunk-Storage/HOT/testindex1 /Splunk-Storage/HOT/HOT1/testindex1
mv /Splunk-Storage/COLD/testindex1 /Splunk-Storage/COLD/COLD1/testindex1
mv /Splunk-Storage/HOT/testindex2 /Splunk-Storage/HOT/HOT2/testindex2
mv /Splunk-Storage/COLD/testindex2 /Splunk-Storage/COLD/COLD2/testindex2

It took no time, of course. Then I created symbolic links, like this:

ln -s /Splunk-Storage/HOT/HOT1/testindex1 /Splunk-Storage/HOT/testindex1
ln -s /Splunk-Storage/COLD/COLD1/testindex1 /Splunk-Storage/COLD/testindex1
ln -s /Splunk-Storage/HOT/HOT2/testindex2 /Splunk-Storage/HOT/testindex2
ln -s /Splunk-Storage/COLD/COLD2/testindex2 /Splunk-Storage/COLD/testindex2

Then I started Splunk. At this point, Splunk remained unaware of the changes on the underlying file system, yet it continued to function, with the actual data now residing in the correct path. After repeating this process on all indexers, I modified the indexes.conf on the CM and pushed the changes. After checking that everything was correct, I removed the soft links.
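For reference, a minimal sketch of the kind of indexes.conf change pushed from the CM in a setup like this, showing only the repointed paths for one index and assuming the paths are referenced directly rather than through volume: definitions:

indexes.conf
[testindex1]
homePath = /Splunk-Storage/HOT/HOT1/testindex1
coldPath = /Splunk-Storage/COLD/COLD1/testindex1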
Thanks @ITWhisperer 
Use max_match

| rex max_match=0 field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"

Note that this will give you a multivalue field
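If one row per extracted value is easier to work with than a multivalue field, a minimal sketch of a follow-up (the field name matches the rex above):

| rex max_match=0 field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"
| mvexpand myfield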
Hi @gcusello @ITWhisperer

| rex field=result "\w:\\\\\w+\\\\\w+\\\\(?<myfield>[^\.]+)"

The above query extracts, but if I want to extract APP and WEB, which are in the same line, how do I extract them?

Fi a:\abc\def\MNOPQ.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\APP.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\WEB.txt content is expected to include "A H Dis Query,0,0"
Fi a:\abc\def\SQL.txt content is expected to include "A H Dis Query,0,0"