All Posts



As I said, change the tableCellColourWithoutJS to be the id of your panel
OK, we got this to work in the Classic dashboard; however, this drilldown does not work in the new Dashboard Studio. What we use now is the below, with $row.id|n$ filling in the data from the field we want. In Dashboard Studio, though, it does not evaluate $row.id|n$; it just treats it as part of the URL and never inserts the ID. { hxxps://mySplunkcloud/en-US/app/missioncontrol/mc_incident_review#/$row.id|n$ }
1. I don't see why you calculate the IN/OUT parameter if you don't need this value in the end. 2. Assuming you don't need the DIR field, you can simply use xyseries to put your values into an x/y table: | xyseries Date file count  Now you can just calculate your sum/difference of the various files, as you have them as separate fields (you might have to fillnull with zero if you have blank cells).
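For reference, a minimal end-to-end sketch along those lines, built on the query from the thread (the fillnull and eval steps are only illustrative and not tested against your data):
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| stats count by Date, file
| xyseries Date file count
| fillnull value=0 RPWARDA SPWARAA SPWARRA
| eval Diff=RPWARDA-(SPWARAA+SPWARRA)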
My additional two cents on that - this combined search uses a subsearch, the results of which are appended to the results of the "main" initial search. You have to understand the limitations of subsearches. They have limits on returned results (which you might not hit here, since you're summarizing the data with stats, so you'd probably be returning just a handful of rows) and - more importantly - on execution time. This matters because if your subsearch runs for too long it gets finalized silently, which means that only the values calculated so far are returned to the outer search and you have no indication whatsoever that the subsearch wasn't allowed to run to its natural end. So in the end you might get no results, incomplete results, or wrong results and not be aware of it. Therefore it's advisable to:
1. Keep the subsearches short (meaning not searching through a lot of data).
2. If possible, use indexed fields (as with the tstats command).
3. If you have two searches which differ significantly in number of results and execution time, use the small/short one as the appended/joined/whatever subsearch.
So in the case of this particular scenario I'd swap the initial raw-data search with tstats to lower the probability of the whole search "running away":
index=_internal host=splunk_shc source=*license_usage.log* type=Usage | stats sum(b) as Usage by h | eval Usage=round(Usage/1024/1024/1024,2) | rename h as host, Usage as usage_lastest_hour | append [ | tstats count where index=* by host, index, sourcetype ] | stats values(count) as events_latest_hour values(usage_lastest_hour) as usage_lastest_hour by host, index, sourcetype | sort - events_latest_hour, usage_lastest_hour
Hi @ITWhisperer  It's working, but if I want to use any multivalue field in the table, the result might be affected, right? Is there any way to hide values for a particular table using CSS?
After discussing with AppDynamics support, we found these observations and a solution. The reason you are not observing any database calls is that not a single Business Transaction has been detected. The agent only detects outgoing calls if they occur within the context of a Business Transaction (BT).
Next steps: Please apply the below-mentioned node property:
Name: find-entry-points
Type: Boolean
Value: True
This property enables a feature that recognizes all the potential entry points that can be created as custom match rules. For more information on how to set or change a node property, please check the reference doc: App Agent Node Properties. Once you have changed the property value, collect the agent debug log files and share them with us, then revert the change. Reference doc: How do I find and configure missing entry points?
We found a couple of potential entry points that will allow you to monitor JDBC calls by enabling them as custom match rules.
Next steps: Please create a POJO BT on the following classes/methods:
BT-1 Class: com.imi.timer.quartz.cronjobs.PauseCampaignMonitor Method: execute
BT-2 Class: com.imi.timer.quartz.cronjobs.StopCampaignMonitor Method: execute
BT-3 Class: com.imi.timer.quartz.cronjobs.AbandonedTargetsMonitor Method: execute
BT-4 Class: com.imi.imicampaign.security.cache.CustomerDetailsCacheManager Method: updateCustomerKeyDetailsCache
BT-5 Class: com.imi.timer.quartz.cronjobs.InstanceDetailsUpdateMonitor Method: execute
#Doc POJO BT https://docs.appdynamics.com/appd/24.x/24.3/en/application-monitoring/configure-instrumentation/transaction-detection-rules/custom-match-rules/java-business-transaction-detection/pojo-entry-points
Hello, I am receiving Darktrace events through my Edge Processor as a forwarder, and I am a bit new to the SPL2 pipeline. It can probably be solved by transforming something in the pipeline. The problem is that I am indexing events with a _time of -5h, a 2h difference from the event timestamp. Here is an example (time as shown in the Edge Processor): [screenshot] It should be noted that the rest of the events that I ingest through this server are arriving at the correct time.
Hi @Real_captain, You can try below; index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA)) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | convert timeformat="%Y/%m/%d" ctime(_time) AS Date | stats count(eval(file="RPWARDA")) AS RPWARDA, count(eval(file="SPWARAA")) AS SPWARAA, count(eval(file="SPWARRA")) AS SPWARRA by Date | eval Diff=(RPWARDA-(SPWARAA+SPWARRA))  
I see there is a premium app to show CDR data from CUCM, but is there a way to view this data by running a search without that app? I have Splunk set up as a billing server in CUCM but am unable to find any CDR data. We are using Enterprise on-prem.
Thanks @scelikok  It's working. I am using coalesce: if PRD succeeds, show the success message; if there is an error, I want to show the error msg instead of the PRD error message. So I tried like below, but it's not working:  | eval output=mvfilter(match(message,"^PRD")) | eval Response=coalesce(error,errorMessage,output)
Hi Team  I want to know if it is possible to find the count of specific fields and show them in different columns. Example: [screenshot of sample events] For the above example, I want the result in the below format:
| Date | Count of File RPWARDA | Count of File SPWARAA | Count of File SPWARRA | Diff (RPWARDA - (SPWARAA + SPWARRA)) |
| 2024/04/10 | 49 | 38 | 5 | 6 |
Is it possible using a Splunk query?
Original query:
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA)) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | eval DIR = if(file="RPWARDA" ,"IN","OUT") | convert timeformat="%Y/%m/%d" ctime(_time) AS Date | stats count by Date , file , DIR
I am trying to access ACS (Admin Config Service) on a Splunk Cloud trial, but am not able to. After acs login, I am getting an error:
linuxadmin@linuxxvz:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{ "code": "500-internal-server-error", "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips." }
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error
A second attempt (from linuxadmin@linuxvm) fails the same way with requestID=5073a1f1-79d0-9ac1-9d9a-675df569846f.
Can someone please help here?
How does a Splunk admin give access to a service account AB-CDRWYVH-L? Access needed: Splunk API read/write access.
Hi @NReddy12, I never experienced this behavior on a Linux server. The only hint is to open a case with Splunk Support, sending them a diag of your Universal Forwarder. Ciao. Giuseppe
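If it helps, a diag can typically be generated on the forwarder host with the Splunk CLI (the path below assumes a default /opt/splunkforwarder install):
/opt/splunkforwarder/bin/splunk diag
The resulting diag-*.tar.gz written under the Splunk home directory is what Support usually asks for.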
Hi @phanikumarcs  good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Do that then!
If your netmask is fixed, you can use the ipmask function   | eval result=ipmask("255.255.255.0", IP)  
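For example, a quick way to see what it returns (a sketch with a made-up IP value; the field name IP is assumed from the thread):
| makeresults
| eval IP="192.168.1.37"
| eval result=ipmask("255.255.255.0", IP)
This should give result=192.168.1.0, i.e. the address with the host part zeroed out by the mask.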
Indeed, the objective is to utilize a lookup operation to match 'G01462' and find either 'G01462 - QA' or 'G01462 - SIT', or both. Alternatively, can I modify the lookup operation to precisely match the "newResource" field with the "Resource" field to retrieve the corresponding values of the "environment" field in the table below?
| Application | environment | appOwner | newResource |
| Caliber | Dicore - TCG | foo@gmail.com | Dicore-automat |
| Keygroup | G01462 - QA | goo@gmail.com | Dicore-automat |
| Keygroup | G01462 - SIT | boo@gmail.com | G01462-mgmt-foo |
I've installed Splunk Universal Forwarder 9.1.0 on a Linux server and configured batch mode for data log file monitoring. There are different types of logs which we monitor with different filenames. We observed very high CPU/memory consumption by the splunkd process when the number of input log files to be monitored is large (> 1,000K approx.). All the input data log files are new, and the total number of events ranges from 10 to 300. A few metric log lines:
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=batchreader1, current_queue_size=0, max_queue_size=0, files_queued=0, new_files_queued=0","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
{"level":"INFO","name":"splunk","msg":"group=tailingprocessor, ingest_pipe=1, name=tailreader1, current_queue_size=1388185, max_queue_size=1409382, files_queued=18388, new_files_queued=0, fd_cache_size=63","service_id":"infra/service/ok6qk4zudodbld4wcj2ha4x3fckpyfz2","time":"04-08-2024 20:33:20.890 +0000"}
Please help me if there is any configuration tuning to limit the number of files to be monitored.
In your example, G01462 doesn't (completely) match any entry in either Resource or environment. Lookup requires an exact match (unless you define it as a wildcard lookup or CIDR). In the case of G01462-mgmt-foo, would you want the lookup to find either G01462 - QA or  G01462 - SIT or both?
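For reference, a wildcard lookup is one way to get a partial match. This is only a sketch under assumptions from the thread (lookup definition name app_env_lookup, lookup fields Resource and environment, event field newResource); the wildcard entries themselves have to exist in the lookup file:
In transforms.conf (or via the lookup definition's Advanced options > Match type in the UI):
[app_env_lookup]
filename = app_env_lookup.csv
match_type = WILDCARD(Resource)
With a lookup row whose Resource value is, say, G01462*, a search like
| lookup app_env_lookup Resource AS newResource OUTPUT environment
would then match newResource values such as G01462-mgmt-foo.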