All Posts



I added export HISTTIMEFORMAT='%F %T' to /root/.bashrc instead of /etc/profile.d to test the HISTTIMEFORMAT setting.

1. However, in Splunk the timestamp lines and the command lines are recognized and searched as separate events:

rm -rf local/                         -> event 1
#1714721901                           -> event 2
cd /opt/splunkforwarder/etc/apps/     -> event 3
#1714721771                           -> event 4

2. For the timestamp test, I added the following setting to props.conf on another Splunk instance that is working well:

[test_bash_history]
BREAK_ONLY_BEFORE = #(?=\d+)
MAX_TIMESTAMP_LOOKAHEAD = 11
TIME_PREFIX = #
TIME_FORMAT = #%s

Is this setting correct?
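For comparison, here is a minimal props.conf sketch for this shape of data (an epoch timestamp line starting with # followed by the command line). The stanza name comes from the post above; the line-breaking regex and the other values are assumptions that may need tuning for your data:

[test_bash_history]
SHOULD_LINEMERGE = false
# assume each record starts at a "#<epoch>" line, so break before it
LINE_BREAKER = ([\r\n]+)(?=#\d{10})
TIME_PREFIX = ^#
# %s means epoch seconds; the "#" is already consumed by TIME_PREFIX
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11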
According to the documentation for search time modifiers, you should be correct, although examples 4 and 5 on that page use a different time format. Try the format from the examples.
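For reference, a minimal sketch of a search using the time-modifier format those examples use (%m/%d/%Y:%H:%M:%S); the index name and dates here are placeholders:

index=main earliest="05/01/2024:00:00:00" latest="05/03/2024:00:00:00"
| stats count by sourcetype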
Sourcetype is important because it categorises the raw data and determines how the data is extracted/parsed into fields. From the screenshot it looks like your data is not being parsed/extracted on the SH.

1. You most likely do not have the correct sourcetype set, or the right TA installed, for this data.

2. This is obviously firewall data. I have never heard of a sourcetype called "firewall" - it could be a custom name, but normally it is set to something meaningful like cisco:asa. Run this search and see if it returns any sourcetypes:

| tstats count where index=firewall BY sourcetype, index
| stats values(sourcetype) BY index

If it doesn't, identify the vendor of the firewall logs, find the TA on Splunkbase, look at how you are ingesting this data (check the inputs and note the metadata settings), and use the sourcetype from there. If there is no suitable TA, you will have to develop a custom sourcetype for this data source.
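To illustrate where that metadata is usually set, here is a minimal, hypothetical inputs.conf monitor stanza - the file path, index, and cisco:asa sourcetype are placeholders, not details from the original post:

# inputs.conf on the forwarder collecting the firewall logs
[monitor:///var/log/firewall/asa.log]
index = firewall
sourcetype = cisco:asa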
Hi, you can easily see the usage data in the Monitoring Console. There are searches behind the panels (the little search icon in the lower right of each panel), and the data for those searches is in your Splunk environment. You can take those searches and modify them to bucket by hour and/or month. It would make sense to save such a search as a report, schedule it to run regularly, and write the results to a summary index. After that you can use the REST API on your Splunk search head to trigger a search against the summarised data and get the results as CSV or JSON. I can't give you a specific search because that depends on your license model and Splunk version, but the panels in the Monitoring Console dashboards give you the searches you need to start from.
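As one possible starting point, a minimal sketch of an hourly license-usage report that writes to a summary index - the _internal license_usage.log source and the summary index name are assumptions, not details from this thread:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 3)
| collect index=summary

Scheduled as a report, the collected results can then be queried through the REST search endpoints and exported as CSV or JSON.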
The timechart command needs a _time field for the time bucketing. Your stats command does not include "by _time", which effectively removes the _time field from the results, so the field is no longer available to any command after the stats line. You would have to at least add _time to the by clause of the stats command. That said, I think your append with the inputlookup creates results that have no _time field at all at the end of the result set, so it will be interesting to see what stats does with those rows. My recommendation is to either do the lookup without the append, or eval a _time value that makes sense onto the appended inputlookup results.
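A minimal sketch of both suggestions together - the base search, the lookup file name, its fields, and the choice of now() for the appended rows are all assumptions for illustration:

index=main sourcetype=my_data
| bin _time span=1h
| stats count by _time, host
| append
    [| inputlookup my_baseline.csv
     | eval _time=now()]
| timechart span=1h sum(count) by host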
First, thanks for posting the data as text. Second, it is risky to post text data without a code box; see how many smiley faces got sprinkled all over. Let me clean it up for you here:

event": "{\"eventVersion\":\"1.08\",\"userIdentity\":{\"type\":\"AssumedRole\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\":\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\":\"533267265705\",\"accessKeyId\":\"ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\":\"Role\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ\",\"arn\":\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\":\"533267265705\",\"userName\":\"PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\":\"2024-05-03T00:53:45Z\",\"mfaAuthenticated\":\"false\"}}},\"eventTime\":\"2024-05-03T04:09:07Z\",\"eventSource\":\"autoscaling.amazonaws.com\",\"eventName\":\"DescribeScalingPolicies\",\"awsRegion\":\"us-west-2\",\"sourceIPAddress\":\"13.52.105.217\",\"userAgent\":\"Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\":\"cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\":\"application-autoscaling\"},\"requestID\":\"ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\":\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\":\"AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\":\"533267265705\",\"eventCategory\":\"Management\",\"tlsDetails\":{\"tlsVersion\":\"TLSv1.3\",\"cipherSuite\":\"TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\":\"application-autoscaling.us-west-2.amazonaws.com\"}}"}

Third, and this is key: are you sure that is the true form of a complete event? For one thing, it seems that there is a missing opening curly bracket ({) and a missing double quotation mark (") before the entire snippet.
If I am correct that you just forgot to include the opening bracket and opening quotation mark, i.e., your real events look like

{"event": "{\"eventVersion\":\"1.08\",\"userIdentity\":{\"type\":\"AssumedRole\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\":\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\":\"533267265705\",\"accessKeyId\":\"ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\":\"Role\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ\",\"arn\":\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\":\"533267265705\",\"userName\":\"PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\":\"2024-05-03T00:53:45Z\",\"mfaAuthenticated\":\"false\"}}},\"eventTime\":\"2024-05-03T04:09:07Z\",\"eventSource\":\"autoscaling.amazonaws.com\",\"eventName\":\"DescribeScalingPolicies\",\"awsRegion\":\"us-west-2\",\"sourceIPAddress\":\"13.52.105.217\",\"userAgent\":\"Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\":\"cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\":\"application-autoscaling\"},\"requestID\":\"ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\":\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\":\"AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\":\"533267265705\",\"eventCategory\":\"Management\",\"tlsDetails\":{\"tlsVersion\":\"TLSv1.3\",\"cipherSuite\":\"TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\":\"application-autoscaling.us-west-2.amazonaws.com\"}}"}

then you would have gotten a field "event" containing the following value:

{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AROAXYKJUXCU7M4FXD7ZZ:redlock","arn":"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock","accountId":"533267265705","accessKeyId":"ASIAXYKJUXCUSTP25SUE","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAXYKJUXCU7M4FXD7ZZ","arn":"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192","accountId":"533267265705","userName":"PrismaCloudRole-804603675133320192"},"webIdFederationData":{},"attributes":{"creationDate":"2024-05-03T00:53:45Z","mfaAuthenticated":"false"}}},"eventTime":"2024-05-03T04:09:07Z","eventSource":"autoscaling.amazonaws.com","eventName":"DescribeScalingPolicies","awsRegion":"us-west-2","sourceIPAddress":"13.52.105.217","userAgent":"Vert.x-WebClient/4.4.6","requestParameters":{"maxResults":10,"serviceNamespace":"cassandra"},"responseElements":null,"additionalEventData":{"service":"application-autoscaling"},"requestID":"ef12925d-0e9a-4913-8da5-1022cfd15964","eventID":"a1799eeb-1323-46b6-a964-efd9b2c30a8a","readOnly":true,"eventType":"AwsApiCall","managementEvent":true,"recipientAccountId":"533267265705","eventCategory":"Management","tlsDetails":{"tlsVersion":"TLSv1.3","cipherSuite":"TLS_AES_128_GCM_SHA256","clientProvidedHostHeader":"application-autoscaling.us-west-2.amazonaws.com"}}

(By the way, the event field should be available whether or not you have KV_MODE = json, and whether or not you have INDEXED_EXTRACTIONS = JSON.) As you can see, this value is compliant JSON. All you need to do is feed this field to spath.
| spath input=event

This way, if my speculation about the missing bracket and quotation mark is correct, the sample you posted should give the following fields and values:

field name                                                 field value
additionalEventData.service                                application-autoscaling
awsRegion                                                  us-west-2
eventCategory                                              Management
eventID                                                    a1799eeb-1323-46b6-a964-efd9b2c30a8a
eventName                                                  DescribeScalingPolicies
eventSource                                                autoscaling.amazonaws.com
eventTime                                                  2024-05-03T04:09:07Z
eventType                                                  AwsApiCall
eventVersion                                               1.08
managementEvent                                            true
readOnly                                                   true
recipientAccountId                                         533267265705
requestID                                                  ef12925d-0e9a-4913-8da5-1022cfd15964
requestParameters.maxResults                               10
requestParameters.serviceNamespace                         cassandra
responseElements                                           null
sourceIPAddress                                            13.52.105.217
tlsDetails.cipherSuite                                     TLS_AES_128_GCM_SHA256
tlsDetails.clientProvidedHostHeader                        application-autoscaling.us-west-2.amazonaws.com
tlsDetails.tlsVersion                                      TLSv1.3
userAgent                                                  Vert.x-WebClient/4.4.6
userIdentity.accessKeyId                                   ASIAXYKJUXCUSTP25SUE
userIdentity.accountId                                     533267265705
userIdentity.arn                                           arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock
userIdentity.principalId                                   AROAXYKJUXCU7M4FXD7ZZ:redlock
userIdentity.sessionContext.attributes.creationDate        2024-05-03T00:53:45Z
userIdentity.sessionContext.attributes.mfaAuthenticated    false
userIdentity.sessionContext.sessionIssuer.accountId        533267265705
userIdentity.sessionContext.sessionIssuer.arn              arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192
userIdentity.sessionContext.sessionIssuer.principalId      AROAXYKJUXCU7M4FXD7ZZ
userIdentity.sessionContext.sessionIssuer.type             Role
userIdentity.sessionContext.sessionIssuer.userName         PrismaCloudRole-804603675133320192
userIdentity.type                                          AssumedRole

However, if your raw events truly are missing the opening bracket and opening quotation mark, you need to examine your ingestion process and fix that. No developer will knowingly omit those. Temporarily, you can use SPL to "fix" the omission and extract the data, like

| eval _raw = "{\"" . _raw
| spath
| spath input=event

But this is not a real solution. Bad ingestion can do a lot of other damage.
Lastly, here is an emulation you can play with and compare with real data:

| makeresults
| eval _raw = "{\"event\": \"{\\\"eventVersion\\\":\\\"1.08\\\",\\\"userIdentity\\\":{\\\"type\\\":\\\"AssumedRole\\\",\\\"principalId\\\":\\\"AROAXYKJUXCU7M4FXD7ZZ:redlock\\\",\\\"arn\\\":\\\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\\\",\\\"accountId\\\":\\\"533267265705\\\",\\\"accessKeyId\\\":\\\"ASIAXYKJUXCUSTP25SUE\\\",\\\"sessionContext\\\":{\\\"sessionIssuer\\\":{\\\"type\\\":\\\"Role\\\",\\\"principalId\\\":\\\"AROAXYKJUXCU7M4FXD7ZZ\\\",\\\"arn\\\":\\\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\\\",\\\"accountId\\\":\\\"533267265705\\\",\\\"userName\\\":\\\"PrismaCloudRole-804603675133320192\\\"},\\\"webIdFederationData\\\":{},\\\"attributes\\\":{\\\"creationDate\\\":\\\"2024-05-03T00:53:45Z\\\",\\\"mfaAuthenticated\\\":\\\"false\\\"}}},\\\"eventTime\\\":\\\"2024-05-03T04:09:07Z\\\",\\\"eventSource\\\":\\\"autoscaling.amazonaws.com\\\",\\\"eventName\\\":\\\"DescribeScalingPolicies\\\",\\\"awsRegion\\\":\\\"us-west-2\\\",\\\"sourceIPAddress\\\":\\\"13.52.105.217\\\",\\\"userAgent\\\":\\\"Vert.x-WebClient/4.4.6\\\",\\\"requestParameters\\\":{\\\"maxResults\\\":10,\\\"serviceNamespace\\\":\\\"cassandra\\\"},\\\"responseElements\\\":null,\\\"additionalEventData\\\":{\\\"service\\\":\\\"application-autoscaling\\\"},\\\"requestID\\\":\\\"ef12925d-0e9a-4913-8da5-1022cfd15964\\\",\\\"eventID\\\":\\\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\\\",\\\"readOnly\\\":true,\\\"eventType\\\":\\\"AwsApiCall\\\",\\\"managementEvent\\\":true,\\\"recipientAccountId\\\":\\\"533267265705\\\",\\\"eventCategory\\\":\\\"Management\\\",\\\"tlsDetails\\\":{\\\"tlsVersion\\\":\\\"TLSv1.3\\\",\\\"cipherSuite\\\":\\\"TLS_AES_128_GCM_SHA256\\\",\\\"clientProvidedHostHeader\\\":\\\"application-autoscaling.us-west-2.amazonaws.com\\\"}}\"}"
| spath
``` data emulation above ```
| spath input=event
@richgalloway I have already tried using this. If you look at my posted question, I already mentioned there that the filters parameter f is not working. Here is a screenshot of what I tried.
Hello, I was playing with the Network Explorer feature and it looks like only the bandwidth metric is available on a Network Map. In a video I found on YouTube, there is a panel where the metric can be changed ("color by..."). How do I enable that? Is it still available in this feature? I'd like to see either latency or packet loss instead of bandwidth. https://www.splunk.com/en_us/resources/videos/network-explorer-overview.html?locale=en_us Thanks!
Hello, I have a lookup file whose content looks like this:

name    count    time
abc     3        04-24
cdf     2        04-24

but I want the content of the lookup file to be like this:

name    count    time
abc     1        04-24
abc     1        04-24
abc     1        04-24
cdf     1        04-24
cdf     1        04-24

How will I be able to do this?
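One way to expand each row into count rows of 1, as a minimal sketch - the file name my_lookup.csv is a placeholder, and this assumes count is always a positive integer:

| inputlookup my_lookup.csv
| eval n=mvrange(0, count)
| mvexpand n
| eval count=1
| fields - n
| outputlookup my_lookup.csv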
Note: this question is not about billing for ingestion using the Splunk add-ons.

Splunk Observability Cloud counts the number of metric time series (MTS) sent during each hour in the month. How can I access any of the billing data through the API, both hourly and monthly?

https://docs.splunk.com/observability/en/admin/subscription-usage/imm-billing.html
Thank you @isoutamo for the help here. I have not yet implemented it, because I want to understand how resilient the whole solution is. As far as I understand, we have two mechanisms:

- forwarder ACK, configured in outputs.conf (useACK=true)
- HEC indexer acknowledgment, configured in inputs.conf (useACK=true)

Both are independent, but when I enable only HEC ACK it effectively enables forwarder ACK as well (because the HF can only return an ACK to the HEC client based on the information returned from the indexer). In the HEC documentation we have: "HEC responds with the status information to the client (4). The body of the reply contains the status of each of the requests that the client queried. A true status only indicates that the event that corresponds to that ackID was replicated at the desired replication factor." So effectively I need to enable useACK=true in inputs.conf - correct?

Also, what happens when the HEC server (my HF) has a hardware crash before it receives the ACK from the indexer (or even before it flushes its output queue)? Will it be able to recover after that crash? To do that it would need some kind of journal/file-system persistence. Without that, if the event is lost, my HEC client will keep querying the HEC server indefinitely...

Thanks, Michal
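For reference, a minimal sketch of where each setting lives - the output group name, server names, and token stanza are placeholders, not values from this environment:

# outputs.conf on the HF (forwarder-to-indexer acknowledgment)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true

# inputs.conf on the HF (HEC indexer acknowledgment, per token)
[http://my_hec_token]
token = <token value>
useACK = true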
How to get Splunk billing usage data hourly and monthly through APIs
I am trying to plot a line graph where the x-axis is an index and the y-axis is a random value. I am also trying to add an annotation where annotationX is an index. Below is the code for the visualization.

"visualizations": {
    "viz_kHEXe45c": {
        "type": "splunk.area",
        "dataSources": {
            "primary": "ds_Search_1",
            "annotation": "ds_annotation_markers"
        },
        "options": {
            "x": "> primary | seriesByIndex(0)",
            "annotationX": "> annotation | seriesByIndex(0)",
            "annotationLabel": "> annotation | seriesByIndex(1)",
            "annotationColor": "> annotation | seriesByIndex(2)",
            "nullValueDisplay": "zero"
        },
        "title": "Test Event Annotation",
        "showProgressBar": false,
        "showLastUpdated": false
    }
},
"dataSources": {
    "ds_Search_1": {
        "type": "ds.search",
        "options": {
            "query": "| makeresults count=15\n| streamstats count\n| eval index=count\n| eval value=random()%100\n| fields index value"
        },
        "name": "ds_Search_1"
    },
    "ds_annotation_markers": {
        "type": "ds.search",
        "options": {
            "query": "| makeresults count=3\n| streamstats count\n| eval index=count\n| eval score = random()%3 +1\n| eval status = case(score=1,\"server error detected\", score=2, \"unknown user access\", score=3, \"status cleared\")\n| eval color = case(score=1,\"#f44271\", score=2, \"#f4a941\", score=3, \"#41f49a\")\n| table index status color"
        },
        "name": "ds_annotation_markers"
    }
},

Below is the line graph output based on the code above. Could anyone please help with how to add the annotation on the line graph when the x-axis is a non-time-based number type?
I don't quite follow you, but it seems like you only need a single dropdown:

index=$index$ (source="/log/test.log" $host$)
| rex field=name "(?<DB>[^\.]*)"
| stats count by DB

What is selecting your index and host fields? Can you share more of your dashboard XML? Note that the above doesn't do the rename or table, as they are not necessary - just use DB as the field for label/value rather than Database.
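To illustrate, a minimal, hypothetical Simple XML dropdown driven by that search - the token names are assumptions, and note that the regex angle brackets must be XML-escaped inside the query:

<input type="dropdown" token="db">
  <label>Database</label>
  <fieldForLabel>DB</fieldForLabel>
  <fieldForValue>DB</fieldForValue>
  <search>
    <query>index=$index$ (source="/log/test.log" $host$) | rex field=name "(?&lt;DB&gt;[^\.]*)" | stats count by DB</query>
  </search>
</input>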
I am not sure what you are missing here - if you want to also restrict the AD data, then add that constraint to the base search as well:

(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR (index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=* AND OU="Test Users")
| eval Account_Name = lower(coalesce(Account_Name, sAMAccountName))
| search Account_Name="test-user"

There is nothing wrong with your logic as such, so at this point you will have a data stream containing two types of event - what are you now looking to do with it? I expect you want to combine these data sets by Account_Name, so you would typically do

| stats values(*) as * by Account_Name

but before doing that type of wildcard stats, limit the fields to what you want with a fields statement before it, i.e.

| fields Account_Name a b c x y z
There are a number of ways to do this - the example below uses makeresults to create your example data.

Simple way 1 - use eventstats to collect all networks for each server and then check whether the results contain fw-network-X, where X is the network the server is on:

| makeresults format=csv data="server,network,firewall
server-1,network-1,yes
server-1,fw-network-1,yes
server-2,network-2,no
server-3,network-1,yes
server-3,fw-network-1,yes
server-4,network-2,no
server-5,network-3,yes
server-5,fw-network-3,yes"
| fields - firewall
``` Above creates your example table ```
| eventstats values(network) as nws by server
| eval firewall=if(nws="fw-".network OR match(network,"^fw-"), "yes", "no")
| fields - nws
| table server network firewall

Depending on the subtleties of your data, you may need to tweak the eval firewall statement.
Your search is written in a very strange way for Splunk SPL, so it is hard to understand what your data looks like and what you are actually trying to get to. Based on your posted search, this is a more efficient replacement - try it and see if it comes up with the same output as your basic search:

index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) (repoter.dataloadingintiated) OR (task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data") OR ("app.mefwebdata - jobintiated")
| eval host=if(match(_raw, "(?i)app\.mefwebdata - jobintiated"), case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24)) + " - " + host_ip , null())
| eval FilesofDMA=if(match(_raw, "task\.dataloadedfromfiles"), 1, 0)
| stats values(host) as "Host Data Details" values(Error) as Error values(local) as "Files created localley on AMP" sum(FilesofDMA) as "File sent to DMA"
| appendpipe
    [ stats dc("Host Data Details") as count
    | eval Error="Job didn't run today"
    | where count==0
    | table Error]
So if a field is not "CIM compliant", does that mean that it cannot be used in tstats?
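For context, tstats can run against indexed fields (for example host, source, and sourcetype) and against data models whether or not a field is CIM-compliant; what matters is that the field is indexed or part of an (ideally accelerated) data model. A minimal sketch of both forms, where the index name and the Network_Traffic data model are illustrative assumptions:

| tstats count where index=main by sourcetype, host

| tstats summariesonly=true count from datamodel=Network_Traffic where nodename=All_Traffic by All_Traffic.src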
Hello, we are on a trial of the Splunk Observability Cloud service. We tried to deploy the guided integration example (the Hipster Shop app). Data can be seen in APM and Infrastructure, but we get an error in all RUM dashboards:

request to http://rum-api-service.o11y-rum/api/rum/v3/node-metrics failed, reason: getaddrinfo ENOTFOUND rum-api-service.o11y-rum

I'm afraid I may have defined the RUM-related environment variables incorrectly during the deployment:

RUM_REALM=jp0
RUM_AUTH=<RUM token>
RUM_APP_NAME=Hipster_Shop                    <- arbitrary
RUM_ENVIRONMENT=Hipster_Shop_Jump_Start      <- arbitrary

As we haven't bought the service yet, we can't submit a support ticket to Splunk support... Would anyone please help? Thanks and regards.
In fact, from this document, https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Consolidatedatafrommultiplehosts, I did not find anything indicating that the second step needs to be executed.