All Topics

Hello, I have an existing high-volume index and have discovered a chunk of event logs within it that would be a great candidate to convert to metrics. Can you filter these types of events to the metrics index and then convert them to metrics at index time, all using props/transforms? I have this props.conf:

[my_highvol_sourcetype]
TRANSFORMS-routetoIndex = route_to_metrics_index

And this transforms.conf:

[route_to_metrics_index]
REGEX = cpuUtilization\=
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_index

But now what sourcetype do I use to apply the log-to-metrics conversion settings? Should I filter this dataset to a new sourcetype within my high-volume index, so I can apply the conversion to all events matching the new sourcetype and then route them to the metrics index? Any thoughts would be helpful on whether something like this is possible with props/transforms.

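One possible shape, sketched under the assumption that CLONE_SOURCETYPE can hand the matching events to a new sourcetype where both the log-to-metrics schema and the index routing apply; the stanza names, metrics sourcetype, and schema name are illustrative, and the combination deserves a test index before production:

# transforms.conf
[clone_for_metrics]
REGEX = cpuUtilization\=
CLONE_SOURCETYPE = my_metrics_sourcetype

[route_to_metrics_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_metrics_index

[metric-schema:cpu_schema]
# cpuUtilization must exist as an index-time field for the schema to pick it up
METRIC-SCHEMA-MEASURES = cpuUtilization

# props.conf
[my_highvol_sourcetype]
TRANSFORMS-clone = clone_for_metrics

[my_metrics_sourcetype]
TRANSFORMS-route = route_to_metrics_index
METRIC-SCHEMA-TRANSFORMS = metric-schema:cpu_schema

The originals stay untouched in the high-volume index; only the clones are converted and routed, at the cost of processing matching events twice.
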
I have a significant number of dashboards that use dbxquery to pull data from a significant number of servers running many NoSQL databases (>20) with standardized collection names (>20). I have database connections defined for each server/database combination, and I'm currently using a simple dbxquery in search to pull data from these collections:

|dbxquery connection=$server_name$_database_name query="SELECT * FROM collection_name"
| (numerous transformations)

This works fine. Unfortunately, there are a lot of field transformations, JSON processing, etc. that need to happen after the query, and it's always the same standard 8-10 lines. I'd like to standardize these queries and embed them in a macro, bundling all of it like this:

`collection_name(server_name)`

The problem is that dbxquery doesn't appear to like being the first command in a macro:

Error in 'dbxquery' command: This command must be the first command of a search. The search job has failed due to an error. You may be able view the job in the Job Inspector.

Any ideas how to implement this macro in a clean way?

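A sketch of one thing worth trying, under the assumption that a macro expanding to a generating command works when the leading pipe lives inside the definition and the macro call is the very first token of the search; the eval line below stands in for the real 8-10 transformation lines:

# macros.conf (illustrative)
[collection_name(1)]
args = server_name
definition = | dbxquery connection="$server_name$_database_name" query="SELECT * FROM collection_name" \
| eval source_server="$server_name$"
iseval = 0

Invoked with nothing in front of it, not even a pipe:

`collection_name(server01)` | stats count

If dbxquery still refuses to expand from a macro, a fallback is to keep the dbxquery line literal in each search and move only the shared post-processing into the macro.
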
Hello, I am new to Splunk. I need to get the top 5 products sold for each day, for the last 7 days. The products can be different each day, as in the example below.

Day (X-Axis)   Top 5 Products (Y-Axis)
                #1   #2   #3   #4   #5
1               P1   PA   P4   AC   ZX
2               P2   PB   P5   AR   P1
3               P3   PC   PA   P5   AC
4               P4   P1   P1   P4   AR
5               P5   PD   AB   AX   AB

Is there a way to get this done? I tried the following, but it gives me the same 5 products for all days and puts everything else in an "OTHER" bucket:

[my search]
| table _time, Product
| timechart count(Product) by Product WHERE max in top5

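A sketch of one way to get a genuinely per-day top 5, assuming a daily count per product followed by a rank filter (field names mirror the example):

[my search]
| bin _time span=1d
| stats count AS sold BY _time Product
| sort 0 _time -sold
| streamstats count AS rank BY _time
| where rank <= 5
| table _time rank Product sold

streamstats numbers the products within each day after the sort, so rank <= 5 keeps a different top 5 for each day rather than one global top 5.
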
Splunk 9.0.0 on Windows servers. I clicked on Apps \ Enterprise Security and was greeted with this error:

App configuration
The "Enterprise Security" app has not been fully configured yet. This app has configuration properties that can be customized for this Splunk instance. Depending on the app, these properties may or may not be required.
Unknown search command 'essinstall'.
OK

Hi All, I am trying to list all tokens via the splunk http-event-collector CLI and it returned the error below:

[centos8-1 mycerts]$ ~/splunk/bin/splunk http-event-collector list -uri https://centos8-1:8089
ERROR: certificate validation: self signed certificate in certificate chain
Cannot connect Splunk server

I used openssl to try to connect to my server and it returned code 0. However, if I use the Splunk-bundled openssl, it returns code 19, and splunkd.log says:

01-14-2023 01:25:22.088 +0800 WARN HttpListener [75758 HttpDedicatedIoThread-6] - Socket error from 192.168.30.128:59764 while idling: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

Once I commented out cliVerifyServerName in server.conf, the CLI works, but with the warning below:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

May I know if I missed any configuration here? The cert was generated by me and is indeed self-signed.

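For reference, a minimal [sslConfig] sketch under the assumption that the CLI needs the self-signed CA in its trust path; the file locations are illustrative, and per the log message above the CA certificate and the server certificate should not share a Common Name:

# server.conf (illustrative paths)
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/server-cert.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca-cert.pem
cliVerifyServerName = true

With the CA reachable via sslRootCAPath and the server certificate's Subject Alternative Name matching the hostname used in -uri, cliVerifyServerName can usually stay enabled instead of being commented out.
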
I have events like the ones below:

-a3bcd: Info1234x:NullValue
-a3bcd: Info1234x:NullValue
-b3bcd: Info1234x:NullValue2
-c3bcd: Info1234x:NullValue3

I managed to produce a table like this:

ErrorInfo                      Count
-a3bcd: Info1234x:NullValue    2
-b3bcd: Info1234x:NullValue2   1
-c3bcd: Info1234x:NullValue3   1

I would like to condense those events into one, since they are all the same kind of error with different parameters, so it would look like:

ErrorInfo     Count
Info1234x:    1

Thanks in advance.

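A sketch of one way to collapse the variants, assuming the shared token can be captured with a regex; the field and pattern names mirror the example:

... | rex field=ErrorInfo "(?<ErrorType>Info\d+x:)"
| stats count BY ErrorType

This yields a total event count per error type (4 here); if the desired Count of 1 instead means the number of distinct variants, stats dc(ErrorInfo) AS Count BY ErrorType would be closer.
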
Hello, if I want to send a job to the background from a dashboard, I have to use Open in Search and only then can I make that choice. I wonder if there is some trick to send a dashboard job directly to the background. Do you have any hints? Thank you and happy splunking, Marco

Hi, I'm trying to onboard NSG Flow Logs. While I have managed to break the events into the specific tuples as per this link [https://answers.splunk.com/answers/714696/process-json-azure-nsg-flow-log-tuples.html?_ga=2.123284427.1721356178.1673537284-343068763.1657544022], I lose a lot of useful information that I need, such as "rule". Does anyone have any ideas? Sample events:

{
  "records": [
    {
      "time": "2017-02-16T22:00:32.8950000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [ "1487282421,42.119.146.95,10.1.0.4,51529,5358,T,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [
                "1487282370,163.28.66.17,10.1.0.4,61771,3389,T,I,A",
                "1487282393,5.39.218.34,10.1.0.4,58596,3389,T,I,A",
                "1487282393,91.224.160.154,10.1.0.4,61540,3389,T,I,A",
                "1487282423,13.76.89.229,10.1.0.4,53163,3389,T,I,A" ] } ] }
        ]
      }
    },
    {
      "time": "2017-02-16T22:01:32.8960000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [ "1487282481,195.78.210.194,10.1.0.4,53,1732,U,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [
                "1487282435,61.129.251.68,10.1.0.4,57776,3389,T,I,A",
                "1487282454,84.25.174.170,10.1.0.4,59085,3389,T,I,A",
                "1487282477,77.68.9.50,10.1.0.4,65078,3389,T,I,A" ] } ] }
        ]
      }
    },
    {
      "time": "2017-02-16T22:02:32.9040000Z",
      "systemId": "2c002c16-72f3-4dc5-b391-3444c3527434",
      "category": "NetworkSecurityGroupFlowEvent",
      "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG",
      "operationName": "NetworkSecurityGroupFlowEvents",
      "properties": {
        "Version": 1,
        "flows": [
          { "rule": "DefaultRule_DenyAllInBound",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [
                "1487282492,175.182.69.29,10.1.0.4,28918,5358,T,I,D",
                "1487282505,71.6.216.55,10.1.0.4,8080,8080,T,I,D" ] } ] },
          { "rule": "UserRule_default-allow-rdp",
            "flows": [ { "mac": "000D3AF8801A",
              "flowTuples": [ "1487282512,91.224.160.154,10.1.0.4,59046,3389,T,I,A" ] } ] }
        ]
      }
    }
  ]
}

I have the following stats search:

index=servers1 OR index=servers2 DBNAME=DATABASENAME source="/my/log/source/*"
| stats list(Tablespace), list("Total MB"), list("Used MB"), list("Free MB"), list("Percent Free") by DBNAME

It is supposed to produce the following results (continued for 20+ rows):

DBNAME        list(Tablespace)  list(Total MB)  list(Used MB)  list(Free MB)  list(Percent Free)
DATABASENAME  RED_DATA          165890          46350          119540         72
              BLUE_DATA         2116            1016           1100           52
              PINK_DATA         10              0               10            100
              PURPLE_DATA       34              17              17            50
              GREEN_DATA        6717            0               6717          100
              ORANGE_DATA       51323           295             51028         99

About 25-30% of the time (as both a live search and a scheduled report) I get different results: the list(Used MB) column contains extra garbage data, appended after the normal values, which appears to be search job performance components (continued for 20+ rows):

ion.command.search                        4
duration.command.search.calcfields        0
duration.command.search.fieldalias        0
duration.command.search.filter            0
duration.command.search.index             2
duration.command.search.index.usec_1_8    0
duration.command.search.index.usec_8_64   0
duration.command.search.kv                0
duration.command.search.lookups

I've adjusted the following limits.conf search head settings with no luck:

[stats]
maxresultrows = 50000
maxvalues = 10000
maxvaluesize = 10000
list_maxsize = 10000

The search inspector also does not produce any notable error messages. Any ideas as to what is happening here and how I can solve it?

Hi, while pushing a custom-created application from the master to the search heads, I am getting the error below:

[root@ip-172-31-23-159 bin]# ./splunk apply shcluster-bundle -target https:172.31.22.82:8089
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error in pre-deploy check, uri=?/services/shcluster/captain/kvstore-upgrade/status, status=502, error=Cannot resolve hostname
[root@ip-172-31-23-159 bin]#

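For comparison, a sketch of the command shape with a fully qualified target URI (note the double slash after https:, which the pasted command is missing; the address and credentials are illustrative):

./splunk apply shcluster-bundle -target https://172.31.22.82:8089 -auth admin:changeme
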
I am trying to drill down within a dashboard. I wish to set a token value with relative_time, using a dynamic relative time specifier input variable. If I set the relative time specifier to "+1h" it works fine:

<eval token="endTime_token">relative_time($startTime_token$, "+1h")</eval>

But when I use a token with value "1h" it does not:

<eval token="endTime_token2">relative_time($startTime_token$, "+$resultion_token$"</eval>

I paste my complete code as reference:

<form>
  <label>Drilldown-lab</label>
  <fieldset submitButton="false">
    <input type="time" token="period_token">
      <label>Period</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>@h</latest>
      </default>
    </input>
    <input type="dropdown" token="resolution_token">
      <label>Resolution</label>
      <choice value="15m">15 minutes</choice>
      <choice value="1h">1 hour</choice>
      <default>1h</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Overview-panel</title>
      <table>
        <search>
          <query>
            index="my_index"
            | bin _time span=$resolution_token$
            | eval startTime = strftime(_time, "%Y-%m-%d %H:%M")
            | stats count by startTime
          </query>
          <earliest>$period_token.earliest$</earliest>
          <latest>$period_token.latest$</latest>
        </search>
        <option name="drilldown">row</option>
        <drilldown>
          <eval token="startTime_token">strptime($row.startTime$, "%Y-%m-%d %H:%M")</eval>
          <eval token="endTime_token">relative_time($startTime_token$, "+1h")</eval>
          <eval token="endTime_token2">relative_time($startTime_token$, "+$resultion_token$"</eval>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$startTime_token$">
      <title>Drilldown-panel $endTime_token$, $endTime_token2$</title>
      <table>
        <search>
          <query>
            index="my_index"
            | stats avg(responseTimeMs) as Responsetime_avg count by assetId
          </query>
          <earliest>$startTime_token$</earliest>
          <latest>$endTime_token$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

I am able to calculate the endTime value in my query (SPL), but I would prefer to be able to set it with "eval token".

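For what it's worth, a sketch of the drilldown with the closing parenthesis balanced and the token name spelled the same way as the dropdown input (the drilldown uses resultion_token while the input defines resolution_token; this assumes the mismatch is unintended):

<drilldown>
  <eval token="startTime_token">strptime($row.startTime$, "%Y-%m-%d %H:%M")</eval>
  <eval token="endTime_token2">relative_time($startTime_token$, "+$resolution_token$")</eval>
</drilldown>

Token substitution happens before the eval runs, so "+$resolution_token$" should reach relative_time as the literal string "+1h" or "+15m".
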
Hello, I'm looking to create a query that searches several conditions at once. For example, get the address for:

1. John from Spain
2. Jane from London
3. Terry from France

My current method is to run a separate query for each example:

index IN (sampleIndex) John AND Spain | stats name, country, address

After running the above query, I run the next one:

index IN (sampleIndex) Jane AND London | stats name, country, address

Running one query per example becomes tedious if I have thousands of examples to go through. Is it possible to get some help creating a query that runs similar logic in one pass, like the following?

index IN (sampleIndex) Jane AND London OR John AND Spain OR Terry AND France | stats name, country, address

Sorry if my question isn't clear.

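A sketch of two possible shapes. With a handful of pairs, explicit parentheses keep the AND/OR grouping unambiguous; with thousands of examples, driving the search from a lookup file may scale better (people.csv and its columns are illustrative, and the second form assumes name and country are extracted fields rather than bare keywords):

index IN (sampleIndex) ((John AND Spain) OR (Jane AND London) OR (Terry AND France))
| table name, country, address

index IN (sampleIndex) [| inputlookup people.csv | fields name country | format]
| table name, country, address

The format command rewrites the lookup rows into a (name="..." AND country="...") OR ... expression that is spliced into the outer search.
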
Hi All, we are working on a dashboard showing UF connection status, using the phone-home interval on the deployment server with the search below. But even when a forwarder phoned home a few seconds ago, it still shows as not connected in the list. Please let us know what needs to change in the search to get an accurate result; the search also runs slowly.

index=_internal source=*metrics.log group=tcpin_connections earliest=-2d@d
| eval Host=coalesce(hostname, sourceHost)
| eval age=(now()-_time)
| stats min(age) AS age max(_time) AS LastTime BY Host
| convert ctime(LastTime) AS "Last Active On"
| eval Status=if(age < 1800, "Running", "DOWN")
| rename age AS Age
| eval Age=tostring(Age, "duration")
| sort Status
| dedup Host
| table Host Status Age "Last Active On"
| where Status="DOWN"

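A sketch of a lighter-weight alternative using the metadata command, which reads index metadata instead of scanning two days of metrics events; the 1800-second threshold mirrors the search above:

| metadata type=hosts index=_internal
| eval age = now() - lastTime
| eval Status = if(age < 1800, "Running", "DOWN")
| eval Age = tostring(age, "duration")
| convert ctime(lastTime) AS "Last Active On"
| where Status="DOWN"
| table host Status Age "Last Active On"

Note this keys on the indexed host field, so hosts that report under a different name than coalesce(hostname, sourceHost) may appear differently than in the original search.
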
Hello, I have a Dashboard Studio dashboard that uses a PNG image as a background image. The app I am working in and the dashboard itself are both set to allow read for everyone. When a user with the admin role loads the dashboard, everything is visible, but when a user with the user role loads it, the background image does not load. I have tried setting the image as a normal image rather than the background image, but that does not change anything. All non-image elements load without issue. Why can a user with the user role not see the images? Splunk version 8.2.6. Thank you and best regards, Andrew

Hi folks, I need a quick clarification: if I use the whitelist setting in inputs.conf, will I save license usage the same way as dropping events with props and transforms? Thanks in advance. Regards, Alessandro

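For reference, a minimal sketch of the two approaches being compared; paths, patterns, and stanza names are illustrative:

# inputs.conf: whitelist decides which files get read at all
[monitor:///var/log/myapp]
whitelist = \.log$

# props.conf / transforms.conf: events are read, then dropped before indexing
[my_sourcetype]
TRANSFORMS-drop_noise = drop_debug_events

[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

In both cases the discarded data should not count against the license, since license usage is metered at indexing time; the whitelist simply filters earlier, before the file is even read.
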
I'm using version 3.10 of the DB Connect app, but on one of the heavy forwarders I'm getting the error below. Is there any way to fix this?

Invalid key in stanza [APP NAME] in /SPLUNK/splunk/etc/apps/CONNECTION_NAME/local/db_inputs.conf, line 235: checkpoint_key (value: 638de13816f31a4b64728e8c).

We recently upgraded our cluster from Splunk 8.1.0.1 to Splunk 9.0.2, and the KV stores on the SH cluster were manually upgraded to WiredTiger. The cluster manager and most peer nodes were automatically upgraded to WiredTiger as well; however, some indexer peers failed at this. Please find the related error messages from mongod.log below. It's not clear why exactly this happened. Is there a manual way to recover and migrate?

---------------
2023-01-12T08:44:16.890Z I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. Detected data files in /usr/local/akamai/splunk/var/lib/splunk/kvstore/mongo created by the 'mmapv1' storage engine, but the specified storage engine was 'wiredTiger'., terminating
2023-01-12T08:44:16.890Z I REPL [initandlisten] Stepping down the ReplicationCoordinator for shutdown, waitTime: 10000ms
2023-01-12T08:44:16.890Z I NETWORK [initandlisten] shutdown: going to close listening sockets...
2023-01-12T08:44:16.890Z I NETWORK [initandlisten] Shutting down the global connection pool
2023-01-12T08:44:16.890Z I - [initandlisten] Killing all operations for shutdown
2023-01-12T08:44:16.890Z I NETWORK [initandlisten] Shutting down the ReplicaSetMonitor
2023-01-12T08:44:16.890Z I CONTROL [initandlisten] Shutting down free monitoring
2023-01-12T08:44:16.890Z I FTDC [initandlisten] Shutting down full-time data capture
2023-01-12T08:44:16.890Z I STORAGE [initandlisten] Shutting down the HealthLog
2023-01-12T08:44:16.890Z I - [initandlisten] Dropping the scope cache for shutdown
2023-01-12T08:44:16.890Z I CONTROL [initandlisten] now exiting
2023-01-12T08:44:16.890Z I CONTROL [initandlisten] shutting down with code:100
---------------

Cluster details (Splunk multisite):
4 SH (site1)
4 SH (site2)
11 IDX (site1)
11 IDX (site2)
Master1 (site1)
Master2 (standby at site2)

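A sketch of the manual path, assuming the documented KV store storage engine migration command is available on these peers (run per instance, with splunkd able to start):

./splunk migrate kvstore-storage-engine --target-engine wiredTiger

The error above suggests server.conf already requests wiredTiger while the on-disk files are still mmapv1, so the files need to be migrated (or the KV store restored or recreated) before mongod will start with that setting.
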
Hi, I'm looking for a rex sed command to extract a value from a field. E.g.:

input: field1 = d:\AppDynamics\machineagent\ver22.2.0.3282\bin\MachineAgentService.exe
output: ver22.2.0.3282

I need a valid sed command that strips everything before the 3rd backslash and after the 4th backslash, along the lines of:

| rex field=version mode=sed "s/ /\*/g"

Thanks, Babu

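A sketch using rex in capture mode rather than sed mode, which may be simpler here; note that a literal backslash inside a rex pattern is usually written as four backslashes (one escaping level for the search string, one for the regex engine), and the field name mirrors the example:

| rex field=field1 "^(?:[^\\\\]+\\\\){3}(?<version>[^\\\\]+)"

The (?:[^\\\\]+\\\\){3} part consumes everything up to and including the 3rd backslash (d:\AppDynamics\machineagent\), and the capture then takes everything up to the 4th, i.e. ver22.2.0.3282.
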
I have an issue with a red status: "The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." I checked Indexing Performance: Instance and almost every queue was at 100%, yet when I check CPU, memory, and license usage there is plenty of headroom. How can I find the cause and fix this problem?

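A sketch of a search for spotting which queue is the bottleneck, based on the queue metrics Splunk writes to _internal (field names per metrics.log group=queue):

index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart perc90(fill_pct) BY name

The first queue in the pipeline that sits near 100% while the ones after it stay low is usually the culprit; if every queue is full, the pressure typically comes from the output or indexing tier downstream.
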
Hey people, I am trying to convert the execution time, which I get in ms, to a duration format:

| rex "EXECUTION_TIME : (?<totalTime>[^ms]+)"

I also tried something like this:

| eval inSec = inMs / 1000
| fieldformat inSec = tostring(inSec, "duration")

but it gives me a null value. Could you please help me out here?

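A sketch that may close the gap, assuming the events contain text like EXECUTION_TIME : 1234ms; the null likely comes from the eval referencing inMs while the rex extracts totalTime, and the capture also needs to be numeric:

| rex "EXECUTION_TIME : (?<inMs>\d+)ms"
| eval inSec = tonumber(inMs) / 1000
| fieldformat inSec = tostring(inSec, "duration")

Note that [^ms]+ in the original is a character class meaning "any characters except m and s", not "everything before ms", so the original capture can pick up trailing spaces or other text.
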