Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I have the below types of logs on two different hosts in my index:

HOST=abc

log1: Tue Feb 2 19:07:26 EST 2021 Host Id :19804 Host Name : abcd Host Status : Running App Id :3403927 Label Name : com.abc.mx.xyz Synchronization : In Sync State : Running Number of template version : 48

log2: Tue Feb 2 19:07:26 EST 2021 Host Id :19804 Host Name : wxyz Host Status : Running App Id :27736 Label Name : com.abcde.abcdefgh Synchronization : Out of Sync State : Running Number of template version : 1

HOST=xyz

log1: 2021-02-03 02:12:49.896, APP_NAME="com.abc.mx.xyz", APP_TEMP_NAME="com.abc.mx.xyz-1", APP_TEMP_VER="1.1.5", LASTDEPLOYED="2019-09-24 13:38:05.047", ENV_NAME="ABCEnvironment_MY"

log2: 2021-02-03 02:12:49.896, APP_NAME="com.abcde.abcdefgh", APP_TEMP_NAME="com.abcde.abcdefgh", APP_TEMP_VER="3.1.0.20201126030342320", LASTDEPLOYED="2020-11-27 13:01:49.959", ENV_NAME="ABCEnvironment_AU"

I want to create a table with fields from both hosts, like this:

App_Name              Sync_State    Last_Deployed            Temp_Version
com.abc.mx.xyz        In Sync       2019-09-24 13:38:05.047  1.1.5
com.abcde.abcdefgh    Out of Sync   2020-11-27 13:01:49.959  3.1.0.20201126030342320

and so on.
Using the below query I am able to get the table:

index=main host IN(abc,xyz)
| rex field=_raw "(?ms)Host\s+Name\s:\s(?<Host_Name>\w+)"
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)APP_NAME=\"(?P<App_Name>[^\"]+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)LASTDEPLOYED=\"(?P<Last_Deployed>[^\"]+)"
| rex field=_raw "(?ms)APP_TEMP_VER=\"(?P<Temp_Version>[^\"]+)"
| stats values(Sync_State) as Sync_State latest(Last_Deployed) as Last_Deployed latest(Temp_Version) as Temp_Version by App_Name
| mvexpand Sync_State
| table App_Name, Sync_State, Last_Deployed, Temp_Version

However, when I try to narrow the search to a particular environment with the query below, the "Sync_State" field comes back blank:

index=main host IN(abc,xyz)
| rex field=_raw "(?ms)Host\s+Name\s:\s(?<Host_Name>\w+)"
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)APP_NAME=\"(?P<App_Name>[^\"]+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)LASTDEPLOYED=\"(?P<Last_Deployed>[^\"]+)"
| rex field=_raw "(?ms)APP_TEMP_VER=\"(?P<Temp_Version>[^\"]+)"
| rex field=_raw "(?ms)ENV_NAME=\"(?P<ENV_NAME>[^\"]+)"
| search ENV_NAME=BPMEnvironment_SG
| stats values(Sync_State) as Sync_State latest(Last_Deployed) as Last_Deployed latest(Temp_Version) as Temp_Version by App_Name
| mvexpand Sync_State
| table App_Name, Sync_State, Last_Deployed, Temp_Version

Can someone please help me edit the query to get the expected result?
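A likely cause, sketched here as an assumption: the `| search ENV_NAME=BPMEnvironment_SG` runs before `stats` and so discards every event from host "abc" — the only events that carry Sync_State — because those events have no ENV_NAME field. Filtering after `stats` has merged both event types under App_Name should keep Sync_State:

```
index=main host IN(abc,xyz)
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)APP_NAME=\"(?P<App_Name>[^\"]+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)LASTDEPLOYED=\"(?P<Last_Deployed>[^\"]+)"
| rex field=_raw "(?ms)APP_TEMP_VER=\"(?P<Temp_Version>[^\"]+)"
| rex field=_raw "(?ms)ENV_NAME=\"(?P<ENV_NAME>[^\"]+)"
| stats values(Sync_State) as Sync_State latest(Last_Deployed) as Last_Deployed latest(Temp_Version) as Temp_Version values(ENV_NAME) as ENV_NAME by App_Name
| search ENV_NAME="BPMEnvironment_SG"
| mvexpand Sync_State
| table App_Name, Sync_State, Last_Deployed, Temp_Version
```

The `values(ENV_NAME)` carries the environment name onto each App_Name row so the filter can be applied after the merge.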
I am reading https://docs.splunk.com/Documentation/Splunk/8.1.2/Search/Parsingsearches but ⌘+F does not work because it is the same as the browser shortcut key. Is there anything I can do?
Hello, I would like to trigger an alert if the value "pending" stays above a threshold (> 500) for a period of time (> 5 min). If the value goes under 500 during those 5 minutes, I don't want the alert to be triggered.

Here is the alert for "above 500":

index=my_index sourcetype=source:type "request(s) still pending"
| rex field=_raw "(?ms)^(?:[^ \n]* ){7}(?P<pending>\d+)"
| where pending > 500

How can I make the alert fire only if the value stays above 500 for more than 5 minutes?
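One common pattern, sketched below with assumptions about the schedule: run the alert every 5 minutes over the last 5 minutes and fire only when the minimum observed value in the window is above the threshold — a single dip below 500 pulls the minimum down and suppresses the alert.

```
index=my_index sourcetype=source:type "request(s) still pending" earliest=-5m@m latest=@m
| rex field=_raw "(?ms)^(?:[^ \n]* ){7}(?P<pending>\d+)"
| stats min(pending) as min_pending count as samples
| where samples > 0 AND min_pending > 500
```

Set the alert trigger condition to "number of results > 0". The `samples > 0` guard prevents firing on an empty window.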
Hello Splunkers!

I have a problem: we're running an infrastructure change, and because of it I'm getting duplicated logs. I'm running a search that shows byte counts for users on the proxy, but because of the duplicate logs I get two usernames instead of one, so the users column ends up empty.

Please look at the image below: the username is duplicated. It's the same user, but the old entry has "d1$" appended to the name. Please help me eliminate the old username and keep only the current one in the search.

I've tried this: user!=*d1$* but the table still misses the users in the columns.
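A sketch, assuming the stale accounts end in the literal suffix "d1$": in a wildcard comparison the "$" is just a character, but field extraction quirks or multivalued user fields can make `user!=*d1$*` miss. A regex filter on the user field, with the "$" escaped, is more precise (the index, sourcetype, and bytes field below are illustrative):

```
index=proxy sourcetype=proxy_logs
| regex user!="d1\$$"
| stats sum(bytes) as total_bytes by user
```

Here the `\$` matches the literal dollar sign and the final `$` anchors it to the end of the username, so only the old "…d1$" entries are dropped.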
Hi, I am using the Splunk Network Diagram Viz app created by @danspav to show server status, but even though the data meets the criteria, a few nodes are not showing any color in the visualization. The colors are set to blue/red, but a few nodes show white. It would be very helpful if someone could explain how this can be fixed, or whether it's a bug.

Query:

index="index_XX"
| rename "XXX" as processName "XXX" as value "XXX" as to "X" as executionId "X" as from
| where processName like ("%HNL%")
| eval color=case(value=="COMPLETE", "blue", value!="COMPLETE", "red")
| dedup executionId
| table from to value color
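A sketch, under the assumption that the white nodes are rows where `value` is null or missing — then neither case() branch matches, `color` stays empty, and the viz renders no fill. An if() with an explicit fallback guarantees every node gets a color:

```
index="index_XX"
| rename "XXX" as processName "XXX" as value "XXX" as to "X" as executionId "X" as from
| where like(processName, "%HNL%")
| eval color=if(value=="COMPLETE", "blue", "red")
| dedup executionId
| table from to value color
```

Unlike `case(value=="COMPLETE","blue", value!="COMPLETE","red")`, the if() treats a null `value` as "not COMPLETE" and still emits "red", so no node is left uncolored.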
I have 3 data sets that I need to combine, and 1 of them does not share a field with the others to compare on. I initially started with a join but moved away from it, given the limited results returned and to save on the processing time of the query.

My data sets are:

Users index: index=users sourcetype=userlist, with the fields user_id, user_title, user_name

Workstations index: index=workstations sourcetype=machines, with the fields pc_id, pc_type, user_name

I can do a stats instead of a join on these, using user_name as the "join" key. So my query is as follows (note a user can have more than 1 PC, hence the mvexpand to break multivalues into individual entries):

(index=users sourcetype=userlist) OR (index=workstations sourcetype=machines)
| stats values(*) as * by user_name
| mvexpand pc_id
| table pc_id pc_type user_name

I now want to take the results of this search, with various other filters, and match them against a third source:

index=windows sourcetype=connections

which has lots of fields, but I am only interested in device_name and last_activity_time.

I want to match the results from my combined search against this third index, comparing device_name to pc_id. I was thinking of adding another clause to the first line of my SPL, for example:

(index=users ...) OR (index=workstations ...) OR (index=windows | fields device_name last_activity_time)
| stats values(*) as * by user_name

However, I don't think this will work, as user_name is not a field in the "windows" index. I am not sure how to extend the search to the third index without resorting to a join.
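One join-free sketch, assuming pc_id and device_name identify the same machine: coalesce them into one key, copy the user attributes across events that share user_name with eventstats, then collapse everything by the machine key.

```
(index=users sourcetype=userlist) OR (index=workstations sourcetype=machines) OR (index=windows sourcetype=connections)
| eval machine=coalesce(pc_id, device_name)
| eventstats values(user_title) as user_title by user_name
| where isnotnull(machine)
| stats values(user_name) as user_name values(user_title) as user_title values(pc_type) as pc_type latest(last_activity_time) as last_activity_time by machine
```

The eventstats pushes user_title from the userlist events onto the workstation events (they share user_name); the final stats then merges the workstation and connections events, which share the machine key, so no event type needs all the fields at once.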
Hello All, grateful for assistance on this one.

We have several areas where servers are HA pairs and write to server-specific logs. However, because they are an HA pair, each server's own log and the equivalent log on the paired server are visible via a shared drive. Thus, server 'A' produces 'serverlogA' but can also see 'serverlogB'; server 'B' produces 'serverlogB' and can also see 'serverlogA'. Because both servers are in the same server class, we end up with duplicated events from both server logs.

We cannot ingest from only one server, because each also has unique log files, and a failure on the ingesting server would require manual intervention to move to the paired server.

I have tried to nullqueue the events as shown below, but have not had any success. Please let me know your thoughts on how to work around this issue. Thanks.

props.conf

[source::/apps/lvservices/mnt/logs/abc/whyluaap182_ContractEnquiry.log]
TRANSFORMS-nullq_cms_uaap181 = nullq_uaap181

[source::/apps/lvservices/mnt/logs/abc/whyluaap181_ContractEnquiry.log]
TRANSFORMS-nullq_cms_uaap182 = nullq_uaap182

transforms.conf

[nullq_uaap181]
SOURCE_KEY = MetaData:Host
REGEX = whyluaap181
DEST_KEY = queue
FORMAT = nullQueue

[nullq_uaap182]
SOURCE_KEY = MetaData:Host
REGEX = whyluaap182
DEST_KEY = queue
FORMAT = nullQueue
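Two things worth checking, sketched below with the same hostnames as placeholders: the TRANSFORMS must be placed on the first full Splunk instance in the data path (indexer or heavy forwarder — a universal forwarder will not apply them), and the value held in the MetaData:Host key has the form `host::<hostname>`, so an anchored regex needs that prefix:

```
# props.conf — deploy to the indexers / heavy forwarder, not the UF
[source::/apps/lvservices/mnt/logs/abc/whyluaap182_ContractEnquiry.log]
TRANSFORMS-nullq_cms_uaap181 = nullq_uaap181

# transforms.conf
[nullq_uaap181]
SOURCE_KEY = MetaData:Host
REGEX = ^host::whyluaap181$
DEST_KEY = queue
FORMAT = nullQueue
```

The unanchored `whyluaap181` should still match as a substring, so if events continue to get through, where the config is deployed is the more likely culprit.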
Good afternoon, community. We need to exclude lookup files from replication between search heads (version 8.1.2). I tried tweaking server.conf and setting these values:

conf_replication_include.lookups = false
conf_replication_summary.blacklist.lookups = (system|apps/*|users(/_reserved)?/*/*)/lookups/*

If a lookup file is created through the UI, it stays local; unfortunately, this does not help when using the outputlookup command, and the file is still distributed across the cluster.

Btool on the search head: splunk btool --debug server list | grep lookup

Search Head Clustering: Configuration Replication (when using the outputlookup command):

Perhaps you have ideas on where else to look to completely prevent replication of lookup files?
Hello, I am trying to build a Splunk dashboard from my logs. The rows look like this:

[row 1] { "id" : 123, "name" : "A", "sub_id" : 444, "count" : 25 }

[row 2] { "id" : 123, "name" : "A", "sub_id" : 445, "count" : 25 }

As you can see, some of my results share the same id but have a different sub_id. I need to sum(count) only once per distinct id, but the results come out duplicated (not exactly twice — some ids appear in 3 or 4 rows, so dividing by 2 is not a good solution).

I want to build a timechart from the above data and wrote SPL like this:

host=[HOST] index=[INDEX] sourcetype=[SOURCETYPE] source=[SOURCE]
| bucket span=1d _time
| chart limit=0 sum(vm.count) as VM by _time

So, how can I sum the count data only once when the id is the same? Thank you.
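A sketch, assuming rows that share an id always carry the same count and should contribute once per day: deduplicate on the id within each day bucket before summing. The field names follow the SPL above — "vm.id" is assumed by analogy with the "vm.count" extraction and may need adjusting to your actual field name.

```
host=[HOST] index=[INDEX] sourcetype=[SOURCETYPE] source=[SOURCE]
| bucket span=1d _time
| dedup _time vm.id
| timechart span=1d sum(vm.count) as VM
```

The `bucket` first snaps every event to its day, so `dedup _time vm.id` keeps one row per id per day, and the timechart then sums each id's count exactly once.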
Hello, we have an application to monitor through Splunk APM, and I heard there is a SignalFx product. Where can I download Splunk APM? How do we install and configure it, and is there an agent to install on the client system?
We have the following architecture: 1 search head, 1 cluster master, 8 indexers, 1 deployment server.

I have now added 2 new indexers. I can see them syncing, but very slowly.

[splunk@ilissplidx10 local]$ cat server.conf
[general]
parallelIngestionPipelines=2
[queue=typingQueue]
maxSize = 20MB
[queue=indexQueue]
maxSize = 30MB
[queue=aggQueue]
maxSize = 30MB
[queue=parsingQueue]
maxSize = 30MB
[clustering]
cxn_timeout = 600
send_timeout = 600
rcv_timeout = 600
heartbeat_period = 10
[kvstore]
disabled = true

[splunk@ilissplidx10 local]$ cat limits.conf
[default]
max_mem_usage_mb = 600
[search]
#dispatch_dir_warning_size = 3500
base_max_searches = 60
# ERROR: Events may not be returned in sub-second order due to memory pressure.
max_rawsize_perchunk = 200000000
[pdf]
max_rows_per_table = 10000
[scheduler]
max_searches_perc = 100
[join]
subsearch_maxout = 500000
[realtime]
indexed_realtime_use_by_default = true

[splunk@ilissplidx10 local]$ cat distsearch.conf
[distributedSearch]
statusTimeout = 20
Hi, I have the below types of logs on two different hosts in my index:

HOST=abc

log1: Tue Feb 2 19:07:26 EST 2021 Host Id :19804 Host Name : abcd Host Status : Running App Id :3403927 Label Name : com.abc.mx.xyz Synchronization : In Sync State : Running Number of template version : 48

log2: Tue Feb 2 19:07:26 EST 2021 Host Id :19804 Host Name : wxyz Host Status : Running App Id :27736 Label Name : com.abcde.abcdefgh Synchronization : Out of Sync State : Running Number of template version : 1

HOST=xyz

log1: 2021-02-03 02:12:49.896, APP_NAME="com.abc.mx.xyz", APP_TEMP_NAME="com.abc.mx.xyz-1", APP_TEMP_VER="1.1.5", LASTDEPLOYED="2019-09-24 13:38:05.047", ENV_NAME="ABCEnvironment_MY"

log2: 2021-02-03 02:12:49.896, APP_NAME="com.abcde.abcdefgh", APP_TEMP_NAME="com.abcde.abcdefgh", APP_TEMP_VER="3.1.0.20201126030342320", LASTDEPLOYED="2020-11-27 13:01:49.959", ENV_NAME="ABCEnvironment_AU"

I want to create a table with fields from both hosts, like this:

App_Name              Sync_State    Last_Deployed            Temp_Version
com.abc.mx.xyz        In Sync       2019-09-24 13:38:05.047  1.1.5
com.abcde.abcdefgh    Out of Sync   2020-11-27 13:01:49.959  3.1.0.20201126030342320

and so on. I created this query:

index=main host IN(abc,xyz)
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)LASTDEPLOYED=\"(?P<Last_Deployed>[^\"]+)"
| rex field=_raw "(?ms)APP_TEMP_VER=\"(?P<Temp_Version>[^\"]+)"
| stats values(Sync_State) as Sync_State by App_Name
| mvexpand Sync_State
| table App_Name, Sync_State, Last_Deployed, Temp_Version

But I am failing to get the desired output: with this query the "Last_Deployed" and "Temp_Version" fields come out empty. Can someone please help me build the right query to create the table in the desired manner?
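Two gaps in the query above, sketched here as the likely fix: the xyz-host events never get an App_Name (only the "Label Name" rex is present, so the APP_NAME="…" events cannot be grouped), and stats drops any field it is not told to aggregate, which is why Last_Deployed and Temp_Version come out empty. Adding the APP_NAME extraction and carrying both fields through stats with latest() should produce the table:

```
index=main host IN(abc,xyz)
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)APP_NAME=\"(?P<App_Name>[^\"]+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)LASTDEPLOYED=\"(?P<Last_Deployed>[^\"]+)"
| rex field=_raw "(?ms)APP_TEMP_VER=\"(?P<Temp_Version>[^\"]+)"
| stats values(Sync_State) as Sync_State latest(Last_Deployed) as Last_Deployed latest(Temp_Version) as Temp_Version by App_Name
| mvexpand Sync_State
| table App_Name, Sync_State, Last_Deployed, Temp_Version
```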
Hi Team, we are trying to onboard data from an Azure workspace. Below are the issues — please reply if you have run into and solved them.

1. Upon creating the input, it executes and sends data for a brief moment, then stops. Since it is configured on a HF, when I re-enable the (already enabled) input, it sends some data again and then stops.

2. The latest version of the app doesn't work with JSON and CSV — has anyone tackled this? (The JSON technically works but isn't useful.)

Has anyone solved these two, or found alternatives?
I am trying to find the time difference between 2 events with different states — in particular, when a device turns on or off. However, I only have a status field showing that it's on (1) or off (0). I used the delta function to derive whether the device is turning on (1), turning off (-1), or staying in the same state (0):

| delta status p=1 as switch_state

I would like to know the operating hours of the device (the time difference between switch_state=-1 and switch_state=1) but am unsure how to do the comparison. My previous attempt used streamstats, but I could only compare between the same states:

| streamstats count(eval(switch_state=-1)) AS startcount by asset
| stats range(_time) AS duration by startcount asset

I'm hoping to change this, or use a different method, to compare states -1 and 1 within the same field and then find the time difference between them.
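A sketch, assuming events are processed oldest-first per asset and that each operating interval runs from a switch_state=1 event to the next switch_state=-1 event: carry the last switch-on timestamp forward with streamstats, then take the difference at each switch-off (the leading "..." stands for your base search and delta, as above).

```
... | delta status p=1 as switch_state
| sort 0 asset _time
| eval on_time=if(switch_state=1, _time, null())
| streamstats last(on_time) as last_on_time by asset
| where switch_state=-1
| eval duration=_time - last_on_time
| stats sum(duration) as operating_seconds by asset
```

This relies on streamstats' last() skipping null values, so each switch-off event sees the most recent switch-on time for its asset.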
On Splunk 7.3.1.1, this issue suddenly popped up out of nowhere: notable alerts are being duplicated for almost 80% of my correlation searches. When I check the raw events behind those duplicates, they are exactly the same, so there is no question of duplicate source logs — only the notables generated in the Incident Review dashboard of Enterprise Security are duplicated. Has anyone faced this issue or resolved it? @woodcock
Hi, we have a clustered environment with 4 indexers, 4 search heads, 1 cluster master, 1 deployment server, and 2 HFs. We are moving our servers onto the Azure platform. What configuration changes do we have to make in Splunk after migrating? What happens to the indexed data in Splunk after migrating to Azure, and how can I retrieve it? I could not find relevant docs on this migration, so please share any links or docs explaining what actions need to be taken after migrating Splunk to Azure. Thanks!
Hello, I have 2 fields I want to filter: name and "short name". I want to pull all the events where name="software" or "short name"="software", and exclude "Splunk", "Adobe", "Microsoft", and another 50 or so names, for both fields.

I have this for the exclusion:

| regex name!="(.*)((?i)(splunk|acrobat|microsoft))(.*)"
| regex "short name"!="(.*)((?i)(splunk|acrobat|microsoft))(.*)"

One question: is there a way to put this in 1 statement instead of duplicating it as above? For example:

| regex (name|"short name")!="(.*)((?i)(splunk|acrobat|microsoft))(.*)"

Thanks, xyz123
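A sketch of a single-statement variant, assuming the where/match form is acceptable in place of regex (the regex command only accepts one field at a time, but match() can be applied to both fields in one where clause; note the single quotes required around a field name containing a space):

```
| where NOT match(name, "(?i)splunk|acrobat|microsoft")
    AND NOT match('short name', "(?i)splunk|acrobat|microsoft")
```

To write the pattern only once, the two fields can also be concatenated into a scratch value first: `| eval combined=coalesce(name,"")."|".coalesce('short name',"") | where NOT match(combined, "(?i)splunk|acrobat|microsoft")`.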
Hi All, how can I see the number of hits on a specific destination IP using the Search & Reporting app? Regards
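A minimal sketch, assuming the traffic lives in an index where the destination is extracted into a CIM-style dest_ip field (the index name, field name, and IP below are placeholders to adjust):

```
index=firewall dest_ip="10.10.10.10"
| stats count as hits
```

or, to rank all destinations at once:

```
index=firewall
| stats count as hits by dest_ip
| sort - hits
```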
Here is the regex to extract message_type based on CIM. Could anyone make it faster than 1387 steps? https://regex101.com/r/dHbs4i/1

(?P<message_type>[^.]query|response)\:

Here is the sample syslog for DNS:

Feb 3 00:42:32 FSTAd2346.g.ae3.dns.mil tmm[19902] 2021-02-03 00:42:32 FSTAd2346.g.ae3.dns.mil qid 26818 from 10.130.27.34#13258: view none: query: a-grv-gocdzace.g.ae3.dns.mil IN A + (10.130.44.87%0)
Feb 3 00:42:32 FSTAd2346.g.ae3.dns.mil tmm[19902] 2021-02-03 00:42:32 FSTAd2346.g.ae3.dns.mil qid 57035 to 10.130.27.34#13258: [NOERROR qr,aa,rd] response: a-grv-gocdzace.g.ae3.dns.mil
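A sketch of a cheaper pattern, assuming the goal is only to capture the word "query" or "response" when it directly precedes a colon: dropping the `[^.]` class (which widens the first alternative and invites retries at every position) and anchoring on the preceding whitespace lets the engine discard non-matching positions quickly. Exact step counts on regex101 will vary.

```
| rex field=_raw "\s(?P<message_type>query|response):"
```

Note that in the original, the `[^.]` sits inside the capture group, so a matched "query" would carry a stray leading character (e.g. a space) into message_type — this version avoids that as well.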
I want to use the "Digital Guardian" app. I installed it as explained on Splunkbase, but when I tried to open the app I got this warning:

""Digital Guardian" app was not fully set up. This app has settings properties that you can customize for this Splunk instance. Depending on the app, these properties may or may not be necessary."

So I filled in the app settings page and pressed the Save button, but it just takes me back to the same page as before, and I keep repeating this over and over. What should I do in this case? Please help...