All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone! Today (6th of July) I started using the Free license because my Enterprise Trial expired on 25th of May. I added data by uploading a txt file that contains 93 events, but I can see only 4 of them. I also got the following message:

    Missing or malformed messages.conf stanza for LM_LICENSE:SLAVE_WARNING__1594016147

I ran this search:

    index=_internal per_index_thruput earliest=-90d@d latest=now
    | timechart span=1d eval(sum(kb)/1024) as "Daily Indexing Volume in MB"

and it says that over 90 days I used 139.38 MB, which looks fine. Could you please advise why I get this message and how I can fix the issue? Thank you in advance.
Dear all, I'm trying to merge the assets on the search head cluster members, but the merging on the members is not working; only LDAP assets are showing up. Also, the assets and identities ES dashboards are working on the deployer, but they are not working on the search head cluster members. Can you please help?
Hi everyone, I am unable to calculate the average of the given values, although I do get values for min() and max(). For context: I am trying to extract response time from logs, and based on that I want to create a chart (probably a bar chart) showing min, max, and avg response time for successful requests. Here are a few of the queries I tried.

First approach:

    index=nonprod source=/some/microservices/alpha-*
    | spath level
    | search level=info
    | search message!="Exception has occurred."
    | regex message="([a-z0-9[\:\/\-.?=%]+)abc/submission] resolved in \[([0-9ms\s\]]+)"
    | rex "resolved in \[(?<resptime>.*? )"
    | stats min(resptime) as Mintime max(resptime) as MaxTme avg(resptime) as AvgTime

Response => Mintime: 12237, MaxTme: 28338, AvgTime: (empty)

Second approach (I thought <resptime> might be a string type, so avg() could not calculate the average, and therefore tried converting the string to a number before applying stats):

    index=nonprod source=/some/microservices/alpha-*
    | spath level
    | search level=info
    | search message!="Exception has occurred."
    | regex message="([a-z0-9[\:\/\-.?=%]+)abc/submission] resolved in \[([0-9ms\s\]]+)"
    | rex "resolved in \[(?<resptime>.*? )"
    | eval responseTime = tonumber(resptime)
    | stats min(responseTime) as Mintime max(responseTime) as MaxTme avg(responseTime) as AvgTime

This approach didn't work at all.

FYI, these are the values I get from <resptime> when I use "| table resptime" right after the rex statement:

    13826, 24812, 20494, 26317, 28338, 25612, 12237, 13470, 17023, 14416, 13979, 24578

I have also found that the eval does not work either: printing its result as a table showed 12 empty rows. I also tried eval with if():

    eval responseTime = if(isNum(resptime),"True",tonumber(resptime)) | table responseTime

No luck. Any help in this regard would be highly appreciated. Thanks
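The extraction and aggregation being attempted can be sanity-checked outside Splunk with a minimal Python sketch. The sample log lines and the tightened pattern below are assumptions, not the poster's real data; the point is that capturing only the digits lets the values convert cleanly to numbers before averaging.

```python
import re
from statistics import mean

# Hypothetical log lines shaped like the ones described in the post
lines = [
    "GET /abc/submission] resolved in [13826ms]",
    "GET /abc/submission] resolved in [24812ms]",
    "GET /abc/submission] resolved in [12237ms]",
]

# Capture only the digits so each value converts cleanly to a number;
# a trailing space or "ms" inside the capture group would break the conversion
times = [int(m.group(1)) for line in lines
         if (m := re.search(r"resolved in \[(\d+)ms\]", line))]

print(min(times), max(times), round(mean(times), 2))
```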
Hi guys, can I check how to craft a query given the following condition? I have two indexes, IndexA and IndexB, with the following fields in each index. For example:

    IndexA fields: srcIP = 10.10.10.10, cat = malicious IP 100%
    IndexB field: TrueClientIP = 10.10.10.10

The objective of my query is to compare "TrueClientIP" in IndexB against "srcIP" in IndexA, with the condition that if the "cat" field in IndexA tags the IP as malicious, it returns the count. How can I craft this query? Thanks for the help.
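A minimal sketch of the comparison logic being asked for, with made-up sample rows standing in for events from the two indexes (in Splunk this would typically be done with a subsearch or lookup; this just shows the shape of the match-and-count):

```python
# Hypothetical rows standing in for events from the two indexes
index_a = [
    {"srcIP": "10.10.10.10", "cat": "malicious IP 100%"},
    {"srcIP": "10.20.30.40", "cat": "benign"},
]
index_b = [
    {"TrueClientIP": "10.10.10.10"},
    {"TrueClientIP": "10.10.10.10"},
    {"TrueClientIP": "192.168.1.5"},
]

# Build the set of source IPs that IndexA categorises as malicious...
malicious = {e["srcIP"] for e in index_a if "malicious" in e["cat"]}

# ...then count IndexB events whose TrueClientIP matches one of them
count = sum(1 for e in index_b if e["TrueClientIP"] in malicious)
print(count)  # 2
```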
Hi, I'm having an issue with the Template for Citrix XenApp. The Environment Overview page shows the Machines & Application count as 250, and rebuilding the lookup doesn't make any change. I have also tried disabling the following, but it doesn't help: Lookups > Lookup definitions > siteHosts > Advanced options > uncheck "Case sensitive match". The lookup_sites.csv just seems to be stuck at 250 machines.

Thanks, AKN
Hi, I'm using a custom visualization app. I want to have a drill-down for the last child node when it is clicked, and I also need to capture the tree path details from the first parent node to the last child node.

Regards, Anil
We have Tomcat running. To collect JMX logs, is it mandatory to have a Heavy Forwarder installed? I know we cannot install the JMX add-on on a UF. Is there any alternative method other than having a HF installed? Also, the catalina, localhost, and manager logs can be collected through a UF, right? Correct me if I am wrong.
Hi guys, I'm facing an issue with the KV Store. Its status is stuck on "starting":

    This member:
    backupRestoreStatus : Ready
    disabled : 0
    guid : 987EAE0D-F75D-401C-B21F-4E640CBC9019
    port : 8191
    standalone : 1
    status : starting

The mongod.log says:

    I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

I have already set the permissions with chmod 400 on the /mongo/splunk.key, but it still shows this message after restarting Splunk:

    2020-07-06T05:34:24.156Z E - [rsSync] Assertion: 17322:write to oplog failed: DocTooLargeForCapped: document doesn't fit in capped collection. size: 124 storageSize:209715200 src/mongo/db/repl/oplog.cpp 150
    2020-07-06T05:34:24.156Z F - [rsSync] terminate() called. An exception is active; attempting to gather more information
    2020-07-06T05:34:24.164Z F - [rsSync] DBException::toString(): Location17322: write to oplog failed: DocTooLargeForCapped: document doesn't fit in capped collection. size: 124 storageSize:209715200
    Actual exception type: mongo::error_details::throwExceptionForStatus(mongo::Status const&)::NonspecificAssertionException
    0x55738d1dee21 0x55738d1de805 0x55738d2d36e6 0x55738d367b39 0x55738d2d3085 0x55738d370083 0x55738d3708d7 0x55738c05cdd1 0x55738bd33ee4 0x55738bd3468f 0x55738bcb0c4c 0x55738d2ee8a0 0x7fdfa06c4724 0x7fdfa0403e8d
    ----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55738AF1E000","o":"22C0E21","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55738AF1E000","o":"22C0805"},{"b":"55738AF1E000","o":"23B56E6","s":"_ZN10__cxxabiv111__terminateEPFvvE"},{"b":"55738AF1E000","o":"2449B39"},{"b":"55738AF1E000","o":"23B5085","s":"__gxx_personality_v0"},{"b":"55738AF1E000","o":"2452083"},{"b":"55738AF1E000","o":"24528D7"},{"b":"55738AF1E000","o":"113EDD1","s":"_ZN5mongo4repl26ReplicationCoordinatorImpl19signalDrainCompleteEPNS_16OperationContextEx"},{"b":"55738AF1E000","o":"E15EE4","s":"_ZN5mongo4repl8SyncTail17_oplogApplicationEPNS0_22ReplicationCoordinatorEPNS1_14OpQueueBatcherE"},{"b":"55738AF1E000","o":"E1668F","s":"_ZN5mongo4repl8SyncTail16oplogApplicationEPNS0_22ReplicationCoordinatorE"},{"b":"55738AF1E000","o":"D92C4C","s":"_ZN5mongo4repl10RSDataSync4_runEv"},{"b":"55738AF1E000","o":"23D08A0"},{"b":"7FDFA06BC000","o":"8724"},{"b":"7FDFA0317000","o":"ECE8D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.6.17-SERVER-42525-splunk", "gitVersion" : "226949cc252af265483afbf859b446590b09b098", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.4.180-94.97-default", "version" : "#1 SMP Tue Jun 11 08:19:03 UTC 2019 (c0632f4)", "machine" : "x86_64" }, "somap" : [ { "b" : "55738AF1E000", "elfType" : 3 }, { "b" : "7FFF26F76000", "path" : "linux-vdso.so.1", "elfType" : 3 }, { "b" : "7FDFA12C3000", "path" : "/lib64/libresolv.so.2", "elfType" : 3 }, { "b" : "7FDFA0FE2000", "path" : "/opt/splunk/lib/libcrypto.so.1.0.0", "elfType" : 3 }, { "b" : "7FDFA1669000", "path" : "/opt/splunk/lib/libssl.so.1.0.0", "elfType" : 3 }, { "b" : "7FDFA0DDE000", "path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7FDFA0BD6000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7FDFA08D9000", "path" : "/lib64/libm.so.6", "elfType" : 3 }, { "b" : "7FDFA06BC000", "path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7FDFA0317000", "path" : "/lib64/libc.so.6", "elfType" : 3 }, { "b" : "7FDFA14DA000", "path" : 
"/lib64/ld-linux-x86-64.so.2", "elfType" : 3 }, { "b" : "7FDFA164B000", "path" : "/opt/splunk/lib/libz.so.1", "elfType" : 3 } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55738d1dee21] mongod(+0x22C0805) [0x55738d1de805] mongod(_ZN10__cxxabiv111__terminateEPFvvE+0x6) [0x55738d2d36e6] mongod(+0x2449B39) [0x55738d367b39] mongod(__gxx_personality_v0+0x2B5) [0x55738d2d3085] mongod(+0x2452083) [0x55738d370083] mongod(+0x24528D7) [0x55738d3708d7] mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl19signalDrainCompleteEPNS_16OperationContextEx+0x511) [0x55738c05cdd1] mongod(_ZN5mongo4repl8SyncTail17_oplogApplicationEPNS0_22ReplicationCoordinatorEPNS1_14OpQueueBatcherE+0xC04) [0x55738bd33ee4] mongod(_ZN5mongo4repl8SyncTail16oplogApplicationEPNS0_22ReplicationCoordinatorE+0x13F) [0x55738bd3468f] mongod(_ZN5mongo4repl10RSDataSync4_runEv+0x11C) [0x55738bcb0c4c] mongod(+0x23D08A0) [0x55738d2ee8a0] libpthread.so.0(+0x8724) [0x7fdfa06c4724] libc.so.6(clone+0x6D) [0x7fdfa0403e8d] ----- END BACKTRACE -----

Could someone please help me with this?
I have deployed a scripted input with source=perfmon_script that gets server and workstation data.

In props.conf I have:

    [source::perfmon_script]
    TRANSFORMS-changesourcetype = sourcetype_new

In transforms.conf:

    [sourcetype_new]
    REGEX = .
    FORMAT = sourcetype::somesrctype
    DEST_KEY = MetaData::Sourcetype

The sourcetype is not changing. What am I doing wrong?
Greetings. I installed G Suite for Splunk and got data. However, the Gmail and Drive quota values look strange. Our company is using G Suite Enterprise, which has unlimited quota for Mail and Drive; could that be affecting this? If anyone knows a solution, please let me know. Best regards.
Hi, my search is as follows:

    DESCRIPTION="* sump *" OR (DESCRIPTION="* ejector pump *" AND DESCRIPTION="* run/stop *") (VALUE="RUN" OR VALUE="STOP" OR VALUE="TRIP") ASSET_NAME="*TAM/*"
    | eval TIMEONLY=strptime(CREATEDATETIME, "%d/%m/%Y %I:%M:%S %p")
    | eval _time=TIMEONLY
    | rex field=VALUE mode=sed "s/TRIP/STOP/g"
    | rex field=DESCRIPTION mode=sed "s/Trip/Run\/Stop/g"
    | rex field=ASSET_NAME "^(?<LOCATION>[^/]+)"
    | streamstats count(eval(VALUE="STOP")) AS TransactionID BY ASSET_NAME DESCRIPTION
    | stats range(_time) AS duration list(VALUE) AS VALUES min(_time) AS _time BY TransactionID ASSET_NAME DESCRIPTION
    | eval newfield=if(duration>=1800,1,null)
    | sort by ASSET_NAME

Here is part of the result I get. I would like to ask if there is a way to check, from my DESCRIPTION field, that my pumps are always working in alternation. For example:

    STN DR Sump Pump 01 Run/Stop Status: DR Pump RM 01 runs and stops, followed by
    STN DR Sump Pump 02 Run/Stop Status: DR Pump RM 01, then
    STN DR Sump Pump 01 Run/Stop Status: DR Pump RM 01

If the run/stop cycles do not alternate, I want an alert or a flag for the abnormality.
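The alternation check being described can be sketched as plain code. The sketch assumes a simplified, time-ordered list of which pump ran (the pump names are placeholders): any position where the same pump appears twice in a row is the abnormality to flag.

```python
# Hypothetical ordered run events for one pump room
run_sequence = ["Pump 01", "Pump 02", "Pump 01", "Pump 01", "Pump 02"]

def non_alternating_positions(seq):
    """Return the indexes where the pump that ran is the same as the previous run."""
    return [i for i in range(1, len(seq)) if seq[i] == seq[i - 1]]

# Position 3 is flagged: Pump 01 ran twice in a row
print(non_alternating_positions(run_sequence))  # [3]
```

In SPL the equivalent idea would be comparing each event's pump name against the previous one per location (e.g. with streamstats or autoregress) and alerting when they match.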
Please help me with the below query. I am using it to extract an array of JSON data:

    search storeAction="storeOffline" | eval OfflineStoreID = spath(_raw,"stores{}")

I am able to evaluate the list, like:

    TestT001 TestT002 Test0000 Test1000 Test2000 Test3000

I want the list to contain only the IDs, with the "Test" prefix removed, i.e.:

    T001 T002 0000 1000 2000 3000

Please let me know how to do this.
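The prefix-stripping logic, sketched in Python using the sample values from the post (in SPL this would typically be an eval with replace(); this just shows the intended transformation):

```python
store_ids = ["TestT001", "TestT002", "Test0000", "Test1000", "Test2000", "Test3000"]

# Strip the fixed "Test" prefix; removeprefix (Python 3.9+) leaves
# values that lack the prefix untouched
cleaned = [s.removeprefix("Test") for s in store_ids]
print(cleaned)  # ['T001', 'T002', '0000', '1000', '2000', '3000']
```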
Hi, below is the result from my transaction command. How do I extract only one date from the multiple dates below? I only need the first one, which is 2020-07-05 22:02:01.

    2020-07-05 22:02:01
    2020-07-05 22:02:36
    2020-07-05 22:02:58
    2020-07-06 03:02:41

I tried mvindex and split but they don't give me a result.

Thanks,
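The first-value extraction can be sketched outside Splunk, assuming the multivalue result is represented as one newline-joined string (in SPL the analogous step would be mvindex(field, 0) on the multivalue field):

```python
# The multivalue field from the post, represented as a newline-joined string
dates = ("2020-07-05 22:02:01\n"
         "2020-07-05 22:02:36\n"
         "2020-07-05 22:02:58\n"
         "2020-07-06 03:02:41")

# Split into individual values and keep only the first (earliest) one
first = dates.split("\n")[0]
print(first)  # 2020-07-05 22:02:01
```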
Hi, why are Splunk correlation searches not running in the Splunk Enterprise Security app, while the same correlation search runs in another app, for example Search & Reporting? An example is "Substantial Increase In Intrusion Events". ES version 6.2.0, Splunk version 8.0.4. I have uploaded a screenshot of the event.
Dear Splunkers, I am trying to achieve the below and would like to ask for suggestions, solutions, or pointers.

Scenario: I have two database tables, A and B, related by a unique identifier (i.e. order number). We have cases where orders take longer to process (say, more than 15 minutes), which can be found from table A. Table B has data on the events that occurred between order placed and order served.

1) We would like to see what is going on for the orders that take more than 15 minutes. There could be reasons like rush hours, a counter operator not being available, or more customers due to some offers.
2) How best can we derive patterns from the given data? How best can we write searches and create reports or dashboards for the above scenario to demonstrate the operational efficiency of a store?

Your help is highly appreciated.
Hello, I'm trying to use the Splunk Add-on for Microsoft Office 365 to collect service status from O365 via the Azure API. My configuration polls the service status every 5 minutes, and I have noticed that it works for a few days in a row, but afterwards Splunk receives events for this sourcetype only once per day, at 2 AM. The problem occurs only with sourcetype o365:service:status; another sourcetype from the same add-on, o365:management:activity, works all the time without problems. Has anyone seen a similar problem? Is there some limitation here, or is the Azure API unstable? Add-on version 2.0.2; Audit Log Search is enabled.
Hi experts, I am trying to create a tag based on event occurrence. For example, if a domain=web event occurred, I want to automatically create a tag for it using a macro. Please help me with this. Thanks in advance.
I have the below Splunk architecture in my environment:

    Universal Forwarders (Linux and Windows) -> Heavy Forwarder -> Indexers (cluster)

I want to know where index-time field extraction will happen:

    On the Heavy Forwarder
    On the Indexers
    On both (in this case, will the indexer override the fields extracted by the HF?)

Are there any specific properties in props/transforms which execute on a specific component in a distributed environment?
I tried installing the Splunk Enterprise 60-day trial, and after providing a username and password I get this error: "You do not have sufficient privileges to complete this installation for all users of the machine. Log on as administrator and then retry this installation." Please suggest.
Hello, I'm trying to add the msg value to my Slack and email trigger alerts, but I'm only getting the first word of the msg field; I would like to get the full message. In the Slack trigger I used $results.msg$, and what I get is "Test". Instead, I want to forward the full message from the msg field: "Test message generated successfully". Could you please assist? Thanks,