All Posts


It seems like you need to ask your AD team to grant you access to the AD group that governs authentication to your HF. Then you will be able to log in. No need to change anything on the backend.
Whereabouts in the dashboard source code would this go? I have attempted to add it at the top, after "<description>Test</description>", but it doesn't seem to have any effect.
Try the eventstats command:
| eventstats count as result by name
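For example, assuming the CSV has been uploaded as a lookup file (the name people.csv below is just an illustration), the full search could look like this:

| inputlookup people.csv
| eventstats count as result by name
| table id, name, age, male, result

eventstats computes the count for each name and writes it back onto every row, which matches the result column in the expected output.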
We would like to create a dashboard with a table showing the top 10 MQ queues based on their current queue length. This is based on the MQ extension, which delivers the custom metrics as expected.
1. With Dashboard & Reports, there is no table widget available.
2. With an Analytics Dashboard, it seems that accessing (custom) metrics with ADQL is not possible.
Any solution to this?
Hello All, Greetings. I am looking for a clear explanation of the memk() function used with the convert command: how it works, and where to pass m, g, or k (the letter k indicates kilobytes, m indicates megabytes, and g indicates gigabytes). When I try this function to convert the kb field to KB, I am not seeing any change in the values. Please help.
index=_internal source="*metric*"
| convert memk(kb) as KB
| table kb, KB
Thanks, Manish Kumar
Hey there, my guess is that you built a custom model in the JupyterLab environment, where you also installed the keras package and imported the preprocessing functionality (from keras import preprocessing). You can probably fit and apply your model just fine inside JupyterLab. The problem, however, is that once you want to fit and apply the model over in Splunk SPL, you get the error you described. Before you do anything else, make sure that the keras package is imported in the correct cell of your Jupyter notebook. The package MUST be imported in the correct cell; if you started from the barebone_template.ipynb, this is the import cell (shown in a screenshot in the original post). You can check whether you import packages in the correct cell by navigating to /app/model/your_notebook.py and checking whether the keras package is imported there. This is the file that will be used once you issue the | fit or | apply command over in Splunk. If this resolved your issue, great. If not, keep reading. The Docker container image you use must have the keras library installed, otherwise the library is not available through the | fit and | apply commands in Splunk SPL. Try resolving your issue by either ...
... using the pre-built 'Transformers CPU (5.1.1)' container image
... using the pre-built 'Transformers GPU (5.1.1)' container image
... building your own Docker image as described here
Make sure to update your DSDL app to the latest version in order to have these pre-built container images available. Let me know if I can help you any further.
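As a rough sketch (not the actual screenshot from the post, and the notebook name is hypothetical), the import cell of a DSDL notebook could look like the following; the key point is that keras is imported in the cell that gets exported to the .py module:

# this cell is exported to /app/model/your_notebook.py when the notebook is saved,
# so every package needed by | fit or | apply must be imported here
import json
import numpy as np
import pandas as pd
from keras import preprocessing  # import keras here, not only in a scratch cell

If keras shows up only in cells that are not exported, the model works interactively in JupyterLab but fails once Splunk executes the generated .py file.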
Hi @vimselva, check in the Search dashboard whether the regex matches, using the regex command:
<your_search> | regex "NormalizedApiId failed to resolve"
Ciao. Giuseppe
Hi, we are currently experiencing reliability issues when using the Microsoft Teams Add-on for Splunk (https://splunkbase.splunk.com/app/4994). The renewal of the Azure subscription, which should take place every 24h, sometimes does not work and will not start again unless we create new inputs (subscription, webhook, call records). I did not find an error message regarding this in the logs. We built an alert for this problem. We use the TA from a HF in the DMZ, so it is possible that we missed a FW rule for one of Microsoft's Graph IPs. The problem does not appear at regular intervals. Rarely, the webhook will crash, requiring a restart of the Splunk process. Has anyone experienced a similar issue and found a solution to this problem?
I have tried to solve this problem with all the combinations, but I am missing some key thing on how to resolve it. I have various logs coming in with the source pattern /var/log/containers/*. I would like to drop the DEBUG logs and hence have the following in props.conf:

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and in transforms.conf:

[debug_to_null]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

After making the above change, as expected, the logs with the DEBUG keyword are getting dropped. Now, I would also like to drop logs with another pattern for a particular source under /var/log/containers, so I've updated my props.conf like this:

[source::/var/log/containers/*_integration-business*.log]
TRANSFORMS-null = setnull

[source::/var/log/containers/*]
TRANSFORMS-null = debug_to_null

and updated transforms.conf like this:

[debug_to_null]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

[setnull]
REGEX = NormalizedApiId failed to resolve
DEST_KEY = queue
FORMAT = nullQueue

After making this change, I can see that only logs with the DEBUG keyword are getting dropped; the logs with "NormalizedApiId failed to resolve" are still being ingested. I was hoping that logs with the DEBUG keyword from all source paths matching /var/log/containers/* would be dropped, and that logs with the "NormalizedApiId failed to resolve" keyword from the particular source paths matching /var/log/containers/*_integration-business*.log would be dropped. But it seems not to work that way. Please guide me on this.
I log from a Lambda function using Python's print statement, and the output is saved in a CloudWatch log group. The log group is being collected by the Splunk Add-on for AWS. However, some logs are collected, but some are not.
Collected:
1. INIT_START Runtime Version: ~
2. START RequestId: ~
3. END RequestId: ~
4. REPORT RequestId: ~
Not collected:
1. Logs I wrote with the print statement
Has anyone been through the same situation as me, or does anyone have a solution to a similar situation?
Hi @kp_pl, yes, coalesce and if are the same here, even if I always use coalesce. I usually use "." instead of "+". Let me summarize: for events from index A, you want to use two concatenated fields from this index, otherwise two concatenated fields from index B, is it correct? In this case you could use:
| eval key=coalesce(fieldA1."_".fieldA2, fieldB1."_".fieldB2)
or, in your way:
| eval JOIN=if(index="AAA", fieldA1."_".fieldA2, fieldB1."_".fieldB2)
Let me know. Ciao. Giuseppe
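Putting it together with stats, a minimal sketch of the whole correlation (index and field names taken from the examples in this thread) could be:

index=AAA OR index=BBB
| eval key=coalesce(fieldA1."_".fieldA2, fieldB1."_".fieldB2)
| stats values(*) AS * by key

Each event builds its composite key from whichever field pair it carries, and stats then groups rows from both indexes on the single key field.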
I am regularly uploading a file to Splunk which I use for my reporting (it might include events with various timestamps). I don't want the data retention policy to be bound by _time, since that might affect the report I am creating. I definitely want Splunk to delete older uploads based on index time, to manage my server's disk usage. Any suggestion will be appreciated!
Everything you write is correct, but it is not my case. Below are my indexes with their keys:
index1: AAA, key values: fieldA1 AND fieldA2
index2: BBB, key values: fieldB1 AND fieldB2
So I suppose I need to do something like
| eval JOIN=if(index="AAA", fieldA1+"_"+fieldA2, fieldB1+"_"+fieldB2)
or, in your way:
| eval key=coalesce(fieldA1+"_"+fieldA2, fieldB1+"_"+fieldB2)
btw. In a few sources close to Splunk I read that if is more efficient than coalesce, but of course both methods do more or less the same.
Hi @kp_pl, I understood that you have key1 in index1 and key2 in index2, and that you want to correlate events from both indexes. Using coalesce, you create a new field (called key) that takes its value from index1 (when key1 is present) or otherwise from index2 (key2). Then you correlate the values using stats, and you have values from both indexes. Ciao. Giuseppe
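As a minimal sketch of that pattern (index and key names assumed from the description above):

index=index1 OR index=index2
| eval key=coalesce(key1, key2)
| stats values(*) AS * by key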
I suppose you do not understand my question. I need to join two indexes by two fields, something like:
| eval key=if(index="aaa", I1key1, I2key1)
| eval key2=if(index="aaa", I1key2, I2key2)
| stats values(*) as * by key key2
I have the following CSV file:
id,name,age,male
1,lily,10,girl
2,bob,12,boy
3,lucy,12,girl
4,duby,10,boy
5,bob,11,boy
6,bob,10,boy
7,lucy,11,girl
Now, I want to use Splunk to count the number of times each name is repeated; the result after counting should be as follows:
id,name,age,male,result
1,lily,10,girl,1
2,bob,12,boy,3
3,lucy,12,girl,2
4,duby,10,boy,1
5,bob,11,boy,3
6,bob,10,boy,3
7,lucy,11,girl,2
How can I use SPL to accomplish this task?
Hi @kp_pl, yes, it's correct. I'd use coalesce instead of if:
index IN (db, app)
| eval key=coalesce(processId, pid)
| stats sum(rows) AS rows sum(cputime) AS cputime by key
Ciao. Giuseppe
Hi Team, I have a dashboard with 7 panels. I need an alert to monitor the dashboard and alert us if any one of the panels shows a percentage > 10. Is there a possibility to create an alert with the dashboard link?
index=db OR index=app
| eval join=if(index="db", processId, pid)
| stats sum(rows) sum(cputime) by join
Above is a simple example of how to join two indexes. But how do you join two indexes where the key value has two fields? K.
Hi @lucilleddajab, let me understand: do you have problems accessing Splunk or the OS? If Splunk, you can reset the admin password, but you said that you already have that password. If you don't have the OS password, you have to ask your network or systems administrators to reset it. Ciao. Giuseppe
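For reference, the usual procedure to reset a lost Splunk admin password (a sketch, assuming a standalone instance on Splunk 7.1 or later and shell access as the Splunk user) is to move the old password store aside, seed a new credential, and restart:

mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

Then create $SPLUNK_HOME/etc/system/local/user-seed.conf with:

[user_info]
USERNAME = admin
PASSWORD = <new password>

$SPLUNK_HOME/bin/splunk restart

On restart, Splunk recreates the admin user with the seeded password.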