All Posts


I see there is a premium app to show CDR data from CUCM, but is there a way to view this data from a search without that app? I have Splunk set up as a billing server in CUCM but am unable to find any CDR data. We are using Splunk Enterprise on-prem.
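A minimal sketch of what the ingestion side might look like, assuming the CUCM billing-server export drops flat CDR/CMR files into a directory on a host running a Splunk forwarder; the path, sourcetype, and index names below are hypothetical placeholders, not values confirmed in this thread:

# inputs.conf (hypothetical path and names)
[monitor:///opt/cucm_cdr_repository/*]
sourcetype = cucm:cdr
index = telephony
disabled = false

Once the files are indexed, a plain search such as index=telephony sourcetype=cucm:cdr should return the raw CDR records without the premium app.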
Thanks @scelikok, it's working. I am using coalesce: if the PRD call succeeds I want to show the success message, and if there is an error I want to show the error message instead of the PRD error message. I tried the following, but it's not working: | eval output=mvfilter(match(message,"^PRD")) | eval Response=coalesce(error,errorMessage,output)
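One variation that may be worth trying, assuming output can be multivalued and that the error fields are null (not empty strings) when the call succeeds; the field names are taken from the post above:

| eval output=mvindex(mvfilter(match(message,"^PRD")), 0)
| eval Response=coalesce(error, errorMessage, output)

coalesce returns the first non-null argument, so if error or errorMessage exist as empty strings they will still win; an extra | eval error=if(error=="", null(), error) would clear those out first.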
Hi Team, I want to know if it is possible to find the count of specific fields and show them in different columns. Example (screenshot not included): for the data shown there, I want the result in the below format:
| Date | Count of File RPWARDA | Count of File SPWARAA | Count of File SPWARRA | Diff (RPWARDA - (SPWARAA + SPWARRA)) |
| 2024/04/10 | 49 | 38 | 5 | 6 |
Is it possible using a Splunk query?
Original query:
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA)) | rex field=TEXT "NIDF=(?<file>[^\s]+)" | eval DIR = if(file="RPWARDA","IN","OUT") | convert timeformat="%Y/%m/%d" ctime(_time) AS Date | stats count by Date, file, DIR
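A sketch of one way to pivot those counts into columns, building on the original query above (untested against the real data; it assumes each NIDF value becomes its own column name after chart):

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| chart count over Date by file
| fillnull value=0 RPWARDA SPWARAA SPWARRA
| eval Diff = RPWARDA - (SPWARAA + SPWARRA)

chart produces one count column per distinct file value, and fillnull guards against dates where one of the file types is missing.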
I am trying to access ACS (Admin Config Service) on a Splunk Cloud trial, but I am not able to. After acs login, I am getting an error:

linuxadmin@linuxxvz:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{
  "code": "500-internal-server-error",
  "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error

A second attempt from linuxadmin@linuxvm fails in exactly the same way, just with a different requestID (5073a1f1-79d0-9ac1-9d9a-675df569846f). Can someone please help here?
How can a Splunk admin grant access for a service account (AB-CDRWYVH-L)? Access needed: Splunk API read/write access.
Hi @NReddy12, I never experienced this behavior on a Linux server. The only hint I can give is to open a case with Splunk Support, sending them a diag of your Universal Forwarder. Ciao. Giuseppe
Hi @phanikumarcs, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Do that then!
If your netmask is fixed, you can use the ipmask function   | eval result=ipmask("255.255.255.0", IP)  
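For the values in the original question (192.168.1.10 with mask 255.255.255.0), a quick way to try it out is with makeresults:

| makeresults
| eval IP="192.168.1.10"
| eval result=ipmask("255.255.255.0", IP)

result should come back as 192.168.1.0.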
Indeed, the objective is to utilize a lookup operation to match 'G01462' and find either 'G01462 - QA' or 'G01462 - SIT', or both. Alternatively, can I modify the lookup operation to precisely match the "newResource" field with the "Resource" field and retrieve the corresponding values of the "environment" field from the table below?
| Application | environment | appOwner | newResource |
| Caliber | Dicore - TCG | foo@gmail.com | Dicore-automat |
| Keygroup | G01462 - QA | goo@gmail.com | Dicore-automat |
| Keygroup | G01462 - SIT | boo@gmail.com | G01462-mgmt-foo |
I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for log file monitoring. There are different types of logs, with different filenames, that we are monitoring. We observed very high CPU/memory consumption by the splunkd process when the number of input log files to be monitored is large (approx. > 1000K). All the input log files are new, and each contains roughly 10 to 300 events. A few metric logs:
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=batchreader1, current_queue_size=0, max_queue_size=0, files_queued=0, new_files_queued=0","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
Please help me if there is any configuration tuning to limit the number of files to be monitored.
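Not a verified fix, but one setting that may be worth experimenting with is max_fd in limits.conf, which caps how many file descriptors the tailing processor keeps open at once (note this limits concurrently open files, not the length of the monitoring queue itself):

# limits.conf on the Universal Forwarder (the value is an illustrative guess, not a recommendation)
[inputproc]
max_fd = 64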
In your example, G01462 doesn't (completely) match any entry in either Resource or environment. Lookup requires an exact match (unless you define it as a wildcard lookup or CIDR). In the case of G01462-mgmt-foo, would you want the lookup to find either G01462 - QA or  G01462 - SIT or both?
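For reference, a wildcard lookup is declared in transforms.conf along these lines (just a sketch; the stanza name and field are taken from the thread below, and it assumes the environment values stored in the CSV are rewritten to contain literal * wildcards, e.g. G01462*, since wildcard matching applies to the lookup values, not to the event values):

[cmklookup]
filename = cmklookup.csv
match_type = WILDCARD(environment)
max_matches = 10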
Did you find a solution? I also seem to be experiencing the same problem. In my case the username and password are the ones I used when installing Splunk Enterprise, after accepting the licence agreement. Anyone?
Try something like this
(?m)^.*field1 \=\s*(?<log1>\S*?)\s*\n
(?m)^.*field2 \=\s*(?<log2>\S*?)\s*\n
(?m)^.*field3 \=\s*(?<log3>\S*?)\s*\n
Done, thank you @ITWhisperer.
Hi @ITWhisperer @gcusello, please help. This is another issue, related to a csv dataset and a lookup dataset.
From this SPL: source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv" I'm getting the output below:
| Subscription | Resource | Key Vault | Secret | Expiration Date | Months |
| BoB-foo | Dicore-automat | Dicore-automat-keycore | Dicore-tuubsp1sct | 2022-07-28 | -21 |
| BoB-foo | Dicore-automat | Dicore-automat-keycore | Dicore-stor1scrt | 2022-07-28 | -21 |
| BoB-foo | G01462-mgmt-foo | G86413-vaultcore | G86413-secret-foo | | |
From this lookup: | inputlookup cmklookup.csv I'm getting the output below:
| Application | environment | appOwner |
| Caliber | Dicore - TCG | foo@gmail.com |
| Keygroup | G01462 - QA | goo@gmail.com |
| Keygroup | G01462 - SIT | boo@gmail.com |
I want to combine the two queries into one, where the output only displays results where the 'environment' and 'Resource' fields match. For instance, if 'G01462' matches in both fields across both datasets, it should be included in the output. How can I do this? Could anyone help write the SPL? I tried the following, but it's not working for me:
source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv" | join type=inner [ | inputlookup cmklookup.csv environment ]
source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv" | lookup cmklookup.csv environment AS "Resource" OUTPUT "environment"
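A possible sketch for the combined search (untested; it assumes the shared key is always the part of Resource / environment before the first hyphen or space, e.g. G01462 or Dicore):

source="cmkcsv.csv" host="DESKTOP" index="cmk" sourcetype="cmkcsv"
| eval key=mvindex(split(Resource,"-"),0)
| join type=inner key
    [ | inputlookup cmklookup.csv
      | eval key=mvindex(split(environment," "),0) ]
| table Subscription Resource "Key Vault" Secret "Expiration Date" Months Application environment appOwner

Note that join keeps only the first matching lookup row per key by default, so if one Resource should pull back both G01462 - QA and G01462 - SIT, a stats values(...) by key approach or a wildcard lookup may be a better fit.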
If it is a new / different issue, please raise it as a new question, that way the solved one can stay solved and people can look to help with the unsolved one.
Or is there an option to tell Splunk to insert a separator between the events and not write them directly together?
Hi, we've just upgraded to 9.2.0, which comes with a UI overhaul as detailed here. We previously had a default home dashboard set as a welcome/landing page for new users. With this new UI, the 'Quick Links' appear by default and you need to click on 'Dashboard' at the top to view the default dashboard. This isn't ideal, as we want all users to see the default dashboard on login. Does anyone know a way we can change this? I don't want to set a different default app, as having the apps list on the side bar is key. Thanks
Hello everyone, I want to calculate the network address from an IP and a mask:
IP = 192.168.1.10
Mask = 255.255.255.0
Desired result = 192.168.1.0
Unfortunately I can't find a function or method to do this. I looked at the 'cidrmatch' function but it only seems to return a boolean. Is there another way? Thanks for your help!