All Posts

Thanks @yuanliu, I understand it now; I'm able to get the id for all the knowledge objects owned by the user now. However, I'm still not able to change the owner of a knowledge object via the rest command. I get the following error: "<msg type="ERROR">You do not have permission to share objects at the system level</msg> </messages>". My user account has the sc_admin role, so permission should not be an issue. Am I missing something? Any help is really appreciated.
Hi, I'm new to Splunk, so I apologize if this question seems naive. While experimenting with calculated fields, I found some inconsistent results. Consequently, I removed these fields and tested directly in the search. I'm aware that the syntax I'm using here with eval is not the one specified in the documentation, but I'm using it to simulate the calculated field (and it yields the same results). I've seen this use of eval elsewhere but only for very simple things. When I run: stats sum(eval((bytes/(1024*1024)))) as MB , it works. However, when I run stats sum(eval(round(bytes/(1024*1024),2))) as MB I get results, but they are totally inconsistent. What could be happening? Where is my mistake? (Note that I'm not looking for the correct solution - I already have it - but I want to understand why this syntax doesn't work.) Thanks.
Hi, I am trying to build an automation where I run a query and then pass the resulting IPs to Akamai via a POST API. I know the edgegridauth library can be used to achieve this, but I got stuck on how the action would be configured. Can someone help?
I am getting duplicate events in Splunk from AWS CloudWatch, and I am sending data from only one source to Splunk. How do I resolve it?
Let me first point out that you can only determine if a group of pods as denoted in pod_name_lookup is completely absent (missing), not any individual pod_name.  As such, your "timechart" can only have values 1 and 0 for each missing pod_name_lookup.  Second, I want to note that the calculations to fill null importance values are irrelevant to the problem at hand, therefore I will ignore them.

The way to think through a solution is as follows: you want to populate a field that contains all non-critical pod_name_lookup values in every event so you can compare with running ones in each time interval. (Hint: eventstats.)  In other words, if you have these pods

_time                pod_name            sourcetype
2024-05-08 01:42:10  apache-12           kubectl
2024-05-08 01:41:58  apache-2            kubectl
2024-05-08 01:41:46  kakfa-8             kubectl
2024-05-08 01:41:00  apache-13           kubectl
2024-05-08 01:40:52  someapp-6           kubectl
2024-05-08 01:39:40  grafana-backup-11   kubectl
2024-05-08 01:39:34  apache-4            kubectl
2024-05-08 01:39:32  kafka-6             kubectl
2024-05-08 01:39:26  someapp-2           kubectl
2024-05-08 01:38:16  apache-12           kubectl
2024-05-08 01:38:10  grafana-backup-6    kubectl

and the pod_list lookup contains the following

importance     namespace  pod_name_lookup
critical       ns1        kafka-*
critical       ns1        apache-*
non-critical   ns2        grafana-backup-*
non-critical   ns2        someapp-*

(As you can see, I added "someapp-*" because in your illustration, only one app is "non-critical".  This makes the data nontrivial.)

You will want to produce an intermediate table like this (please ignore the time interval differences, just focus on the material fields):

_time                pod_name_lookup             pod_name_all
2024-05-08 01:35:00
2024-05-08 01:36:00  apache-* grafana-backup-*   grafana-backup-* someapp-*
2024-05-08 01:37:00  kafka-* someapp-*           grafana-backup-* someapp-*
2024-05-08 01:38:00  apache-* grafana-backup-*   grafana-backup-* someapp-*
2024-05-08 01:39:00  apache-* someapp-*          grafana-backup-* someapp-*
2024-05-08 01:40:00  apache-* kakfa-*            grafana-backup-* someapp-*

(This illustration assumes that you are looking for missing pods in each calendar minute; I know this is ridiculous, but it is easier to emulate.)  From this table, you can calculate which value(s) in pod_name_all is/are missing from pod_name_lookup. (Hint: mvmap can be an easy method.)

In SPL, this thought process can be implemented as

index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name
| append
    [inputlookup pod_list where importance = non-critical
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1h@h values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| where isnotnull(missing)
| timechart span=1h@h count by missing

In the above, I changed the time bucket to 1h@h (as opposed to the 1m@m used in the illustrations).  You need to change that to whatever suits your needs.
Here is an emulation used to produce the above tables and this chart:

| makeresults format=csv data="_time, pod_name
10,apache-12
22,apache-2
34,kakfa-8
80,apache-13
88,someapp-6
160,grafana-backup-11
166,apache-4
168,kafka-6
174,someapp-2
244,apache-12
250,grafana-backup-6"
| eval _time = now() - _time
| eval sourcetype = "kubectl", importance = "non-critical"
| eval pod_name_lookup = replace(pod_name, "\d+", "*")
``` the above emulates
index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name ```
| append
    [makeresults format=csv data="namespace, pod_name_lookup, importance
ns1, kafka-*, critical
ns1, apache-*, critical
ns2, grafana-backup-*, non-critical
ns2, someapp-*, non-critical"
    | where importance = "non-critical"
    ``` subsearch thus far emulates
    | inputlookup pod_list where importance = non-critical ```
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1m@m values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| where isnotnull(missing)
| timechart span=1m@m count by missing
Hello, thanks for replying. I checked the permissions and disabled the AV, but the outcome is still the same. Any other ideas? Best regards, Alex
Hi, yes, I was able to get past this issue. I edited the JDBC URL and added the additional KV pairs below:

jdbc:sqlserver://IP:Port;databaseName=dbname;selectMethod=cursor;encrypt=false;trustServerCertificate=true

Hope this helps.
What you can search for depends on your data. If you have properly onboarded data, you should have your events ingested with a well-defined sourcetype and have your fields extracted. Otherwise Splunk might simply not know what you mean by "src_addr" or "dest_addr". Even better if you have your data CIM-compliant - then you can search from a datamodel using just the standardized fields, regardless of the actual fields contained within the original raw event. But that's a slightly more advanced topic. The first thing would be to verify what fields you actually have available. Try running

index=firewall host=your_firewall | head 10

in verbose mode and expand a single event to see what fields are extracted. If your fields are called - for example - src_ip and dest_ip, searching for src_addr and dest_addr will yield no results because Splunk doesn't know those fields.
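If expanding events one by one gets tedious, another quick way to list every field Splunk has extracted is the fieldsummary command. This is only a sketch; index=firewall and host=your_firewall are the placeholder values from the example above:

index=firewall host=your_firewall earliest=-60m
| fieldsummary
| table field count distinct_count

fieldsummary returns one row per field with counts, which makes it easy to spot whether your data carries src_ip/dest_ip instead of src_addr/dest_addr.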
| eval row=mvrange(0,count)
| mvexpand row
| fields - row
| eval count=1
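For context, this snippet expands each row into "count" copies and then resets count to 1. Here is a minimal, self-contained sketch you can paste into a search bar to see it work; the makeresults block only emulates the lookup, and in real use you would start from | inputlookup your_lookup.csv and finish with | outputlookup your_lookup.csv (your_lookup.csv is a placeholder name):

| makeresults format=csv data="name,count,time
abc,3,04-24
cdf,2,04-24"
``` the above emulates | inputlookup your_lookup.csv ```
| eval row=mvrange(0, tonumber(count))  ``` tonumber guards against count arriving as a string from the csv/lookup ```
| mvexpand row
| fields - row
| eval count=1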
Try to post code snippets in either a preformatted paragraph or a code block - it helps readability. But to the point - the BREAK_ONLY_BEFORE setting is only applied when SHOULD_LINEMERGE is set to true (which generally should be avoided whenever possible). To split your input into events containing both the timestamp and the command, you'd need to adjust your LINE_BREAKER to not just treat every line as a separate event but to break the input stream at newlines followed immediately by a hash and a timestamp. It would probably be something like

LINE_BREAKER = ([\r\n]+)#\d+
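Putting that together with the timestamp settings from the related post, a minimal props.conf sketch might look like the following. Treat it as an assumption to test against sample data rather than a verified config; the sourcetype name is the one used in the question.

[test_bash_history]
SHOULD_LINEMERGE = false
# break only before lines that start with a hash followed by an epoch timestamp
LINE_BREAKER = ([\r\n]+)#\d+
# the timestamp is the epoch value right after the leading hash
TIME_PREFIX = #
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11

These are parsing-time settings, so they need to live on the first full Splunk instance that handles the data (indexer or heavy forwarder), not on the universal forwarder.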
I added export HISTTIMEFORMAT='%F %T' to /root/.bashrc instead of /etc/profile.d to test the HISTTIMEFORMAT setting.

1. However, in Splunk, the timestamps and commands are recognized as separate events when searched:

rm -rf local/                        -> event 1
#1714721901                          -> event 2
cd /opt/splunkforwarder/etc/apps/    -> event 3
#1714721771                          -> event 4

2. For the timestamp test, I added the following setting to another Splunk's props.conf, and that works well:

[test_bash_history]
BREAK_ONLY_BEFORE = #(?=\d+)
MAX_TIMESTAMP_LOOKAHEAD = 11
TIME_PREFIX = #
TIME_FORMAT = #%s

Is this setting correct?
According to the documentation for search time modifiers you should be correct, although examples 4 and 5 on that page use a different time format. Try the format from the examples.
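For reference, a couple of hedged examples of search time modifiers (index=your_index is a placeholder): relative modifiers with snap-to, and the absolute %m/%d/%Y:%H:%M:%S format used in the documentation examples.

index=your_index earliest=-7d@d latest=@d
index=your_index earliest="05/01/2024:00:00:00" latest="05/02/2024:00:00:00"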
Sourcetype is important because it categorises the raw data and should extract/parse the data into fields.  From the screenshot it looks like your data is not being parsed/extracted on the SH.

1. You most likely do not have the correct sourcetype set or the correct TA installed for your data.

2. Obviously this is firewall data (I have never heard of a sourcetype called "firewall"; it could be a custom name, but normally it is set to a meaningful name like cisco:asa etc.).  Run this command and see if it returns any sourcetypes:

| tstats count where index=firewall BY sourcetype, index
| stats values(sourcetype) BY index

If it still doesn't, identify the vendor of the firewall logs, find the TA on Splunkbase, look at how you are ingesting this data (inputs), check and note the metadata settings, and use the sourcetype from there. If there is no TA, you will have to develop a custom sourcetype for this data source.
Hi, you can easily see the usage data in the Management Console. There are searches behind the panels (little search icon in the lower right of each panel), and the data for those searches is in your Splunk environment. You can take those searches and modify them to bucket by hour and/or month. It would make sense to save such a search as a report, let the report run regularly (scheduled), and use a summary index for the results. After that you can use the REST API on your Splunk search head to trigger a search against the summarised data and get the results as CSV or JSON. I can't give you a specific search because that depends on your license model and Splunk version, but the panels in the Management Console dashboards give you the searches you need to start from.
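As one concrete starting point, assuming a classic ingest-based license and access to the _internal index on the license manager, the hourly usage can be pulled straight from license_usage.log and then saved as a scheduled report feeding a summary index:

index=_internal source=*license_usage.log type=Usage
| timechart span=1h sum(b) AS bytes
| eval GB = round(bytes/1024/1024/1024, 3)
| fields _time GB

The same search with span=1d (or a larger bucket plus stats) gives the monthly view, and the saved results can then be fetched over the REST API as CSV or JSON.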
The timechart command needs a _time field for the time bucketing. With your stats command you have not done a "by _time", and with that you have ignored/"eliminated" the _time field from the results. That means the field is no longer available for any command after the stats line. You would have to at least add the _time field to the by clause of the stats command. With that said... I think your "append" with the inputlookup would create results with no "_time" field at the end of the result set, so it will be interesting to see what stats does with those. My recommendation is to either try to just do the lookup without the append, or eval some _time field with a value that makes sense for the inputlookup append.
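A minimal sketch of what keeping _time through stats could look like, assuming hourly buckets (the index, field names and the final split-by are placeholders, since the original search was not posted):

index=your_index
| bin _time span=1h
| stats count AS events BY _time, host
| timechart span=1h sum(events) BY host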
First, thanks for posting data in text.  Second, it's a huge risk posting text data without code box.  See how many smily faces you sprinkled all over.  Let me clean up for you here:     event": "{\"eventVersion\":\"1.08\",\"userIdentity\":{\"type\":\"AssumedRole\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\":\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\":\"533267265705\",\"accessKeyId\":\"ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\":\"Role\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ\",\"arn\":\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\":\"533267265705\",\"userName\":\"PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\":\"2024-05-03T00:53:45Z\",\"mfaAuthenticated\":\"false\"}}},\"eventTime\":\"2024-05-03T04:09:07Z\",\"eventSource\":\"autoscaling.amazonaws.com\",\"eventName\":\"DescribeScalingPolicies\",\"awsRegion\":\"us-west-2\",\"sourceIPAddress\":\"13.52.105.217\",\"userAgent\":\"Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\":\"cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\":\"application-autoscaling\"},\"requestID\":\"ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\":\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\":\"AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\":\"533267265705\",\"eventCategory\":\"Management\",\"tlsDetails\":{\"tlsVersion\":\"TLSv1.3\",\"cipherSuite\":\"TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\":\"application-autoscaling.us-west-2.amazonaws.com\"}}"}     Third, and this is key.  Are you sure that's the true form of a complete event?  For one thing, it seems that there is a missing opening curly bracket ({) and a missing double quotation mark (") before the entire snippet.   
If I am correct that you just forget to include the opening bracket and opening question mark, i.e., your real events look like     {"event": "{\"eventVersion\":\"1.08\",\"userIdentity\":{\"type\":\"AssumedRole\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\":\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\":\"533267265705\",\"accessKeyId\":\"ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\":\"Role\",\"principalId\":\"AROAXYKJUXCU7M4FXD7ZZ\",\"arn\":\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\":\"533267265705\",\"userName\":\"PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\":\"2024-05-03T00:53:45Z\",\"mfaAuthenticated\":\"false\"}}},\"eventTime\":\"2024-05-03T04:09:07Z\",\"eventSource\":\"autoscaling.amazonaws.com\",\"eventName\":\"DescribeScalingPolicies\",\"awsRegion\":\"us-west-2\",\"sourceIPAddress\":\"13.52.105.217\",\"userAgent\":\"Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\":\"cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\":\"application-autoscaling\"},\"requestID\":\"ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\":\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\":\"AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\":\"533267265705\",\"eventCategory\":\"Management\",\"tlsDetails\":{\"tlsVersion\":\"TLSv1.3\",\"cipherSuite\":\"TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\":\"application-autoscaling.us-west-2.amazonaws.com\"}}"}     you would have gotten a field "event" containing the following value     {"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AROAXYKJUXCU7M4FXD7ZZ:redlock","arn":"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock","accountId":"533267265705","accessKeyId":"ASIAXYKJUXCUSTP25SUE","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAXYKJUXCU7M4FXD7ZZ","arn":"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192","accountId":"533267265705","userName":"PrismaCloudRole-804603675133320192"},"webIdFederationData":{},"attributes":{"creationDate":"2024-05-03T00:53:45Z","mfaAuthenticated":"false"}}},"eventTime":"2024-05-03T04:09:07Z","eventSource":"autoscaling.amazonaws.com","eventName":"DescribeScalingPolicies","awsRegion":"us-west-2","sourceIPAddress":"13.52.105.217","userAgent":"Vert.x-WebClient/4.4.6","requestParameters":{"maxResults":10,"serviceNamespace":"cassandra"},"responseElements":null,"additionalEventData":{"service":"application-autoscaling"},"requestID":"ef12925d-0e9a-4913-8da5-1022cfd15964","eventID":"a1799eeb-1323-46b6-a964-efd9b2c30a8a","readOnly":true,"eventType":"AwsApiCall","managementEvent":true,"recipientAccountId":"533267265705","eventCategory":"Management","tlsDetails":{"tlsVersion":"TLSv1.3","cipherSuite":"TLS_AES_128_GCM_SHA256","clientProvidedHostHeader":"application-autoscaling.us-west-2.amazonaws.com"}}     (By the way, event should be available whether or not you have KV_MODE=json, whether or not you have index_extraction=JSON.)  As you can see, this value is a compliant JSON.  All you need to do is to feed this field to spath.     
| spath input=event

This way, if my speculation about the missing bracket and quotation mark is correct, the sample you posted should give the following fields and values:

field name                                               field value
additionalEventData.service                              application-autoscaling
awsRegion                                                us-west-2
eventCategory                                            Management
eventID                                                  a1799eeb-1323-46b6-a964-efd9b2c30a8a
eventName                                                DescribeScalingPolicies
eventSource                                              autoscaling.amazonaws.com
eventTime                                                2024-05-03T04:09:07Z
eventType                                                AwsApiCall
eventVersion                                             1.08
managementEvent                                          true
readOnly                                                 true
recipientAccountId                                       533267265705
requestID                                                ef12925d-0e9a-4913-8da5-1022cfd15964
requestParameters.maxResults                             10
requestParameters.serviceNamespace                       cassandra
responseElements                                         null
sourceIPAddress                                          13.52.105.217
tlsDetails.cipherSuite                                   TLS_AES_128_GCM_SHA256
tlsDetails.clientProvidedHostHeader                      application-autoscaling.us-west-2.amazonaws.com
tlsDetails.tlsVersion                                    TLSv1.3
userAgent                                                Vert.x-WebClient/4.4.6
userIdentity.accessKeyId                                 ASIAXYKJUXCUSTP25SUE
userIdentity.accountId                                   533267265705
userIdentity.arn                                         arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock
userIdentity.principalId                                 AROAXYKJUXCU7M4FXD7ZZ:redlock
userIdentity.sessionContext.attributes.creationDate      2024-05-03T00:53:45Z
userIdentity.sessionContext.attributes.mfaAuthenticated  false
userIdentity.sessionContext.sessionIssuer.accountId      533267265705
userIdentity.sessionContext.sessionIssuer.arn            arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192
userIdentity.sessionContext.sessionIssuer.principalId    AROAXYKJUXCU7M4FXD7ZZ
userIdentity.sessionContext.sessionIssuer.type           Role
userIdentity.sessionContext.sessionIssuer.userName       PrismaCloudRole-804603675133320192
userIdentity.type                                        AssumedRole

However, if your raw events truly miss the opening bracket and opening quotation mark, you need to examine your ingestion process and fix that.  No developer will knowingly omit those.  Temporarily, you can use SPL to "fix" the omission and extract the data, like

| eval _raw = "{\"" . _raw
| spath
| spath input=event

But this is not a real solution.  Bad ingestion can cause plenty of other damage.
Lastly, here is an emulation you can play with an compare with real data     | makeresults | eval _raw = "{\"event\": \"{\\\"eventVersion\\\":\\\"1.08\\\",\\\"userIdentity\\\":{\\\"type\\\":\\\"AssumedRole\\\",\\\"principalId\\\":\\\"AROAXYKJUXCU7M4FXD7ZZ:redlock\\\",\\\"arn\\\":\\\"arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\\\",\\\"accountId\\\":\\\"533267265705\\\",\\\"accessKeyId\\\":\\\"ASIAXYKJUXCUSTP25SUE\\\",\\\"sessionContext\\\":{\\\"sessionIssuer\\\":{\\\"type\\\":\\\"Role\\\",\\\"principalId\\\":\\\"AROAXYKJUXCU7M4FXD7ZZ\\\",\\\"arn\\\":\\\"arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\\\",\\\"accountId\\\":\\\"533267265705\\\",\\\"userName\\\":\\\"PrismaCloudRole-804603675133320192\\\"},\\\"webIdFederationData\\\":{},\\\"attributes\\\":{\\\"creationDate\\\":\\\"2024-05-03T00:53:45Z\\\",\\\"mfaAuthenticated\\\":\\\"false\\\"}}},\\\"eventTime\\\":\\\"2024-05-03T04:09:07Z\\\",\\\"eventSource\\\":\\\"autoscaling.amazonaws.com\\\",\\\"eventName\\\":\\\"DescribeScalingPolicies\\\",\\\"awsRegion\\\":\\\"us-west-2\\\",\\\"sourceIPAddress\\\":\\\"13.52.105.217\\\",\\\"userAgent\\\":\\\"Vert.x-WebClient/4.4.6\\\",\\\"requestParameters\\\":{\\\"maxResults\\\":10,\\\"serviceNamespace\\\":\\\"cassandra\\\"},\\\"responseElements\\\":null,\\\"additionalEventData\\\":{\\\"service\\\":\\\"application-autoscaling\\\"},\\\"requestID\\\":\\\"ef12925d-0e9a-4913-8da5-1022cfd15964\\\",\\\"eventID\\\":\\\"a1799eeb-1323-46b6-a964-efd9b2c30a8a\\\",\\\"readOnly\\\":true,\\\"eventType\\\":\\\"AwsApiCall\\\",\\\"managementEvent\\\":true,\\\"recipientAccountId\\\":\\\"533267265705\\\",\\\"eventCategory\\\":\\\"Management\\\",\\\"tlsDetails\\\":{\\\"tlsVersion\\\":\\\"TLSv1.3\\\",\\\"cipherSuite\\\":\\\"TLS_AES_128_GCM_SHA256\\\",\\\"clientProvidedHostHeader\\\":\\\"application-autoscaling.us-west-2.amazonaws.com\\\"}}\"}" | spath ``` data emulation above ``` | spath input=event      
@richgalloway I have already tried using this; if you look at my posted question, I have already mentioned there that the filter parameter f is not working. Here is a screenshot of what I tried.
Hello, I was playing with the Network Explorer feature and it looks like only the bandwidth metric is available on a Network Map. In a video I found on YouTube, there is a panel where the metric can be changed (color by...). How do I enable that? Is it still available in this feature? I'd like to see either latency or packet loss instead of bandwidth. https://www.splunk.com/en_us/resources/videos/network-explorer-overview.html?locale=en_us Thanks!
Hello, I have a lookup file whose content looks like this:

name    count    time
abc     3        04-24
cdf     2        04-24

but I want the content of the lookup file to be like this:

name    count    time
abc     1        04-24
abc     1        04-24
abc     1        04-24
cdf     1        04-24
cdf     1        04-24

How will I be able to do this?
Note: this query is not about billing ingestion using Splunk add-ons. Splunk Observability Cloud counts the number of metric time series (MTS) sent during each hour in the month. How can I access any of this billing data through the API, both hourly and monthly? https://docs.splunk.com/observability/en/admin/subscription-usage/imm-billing.html