All Posts

Hello @meetmshah, how do you add custom ES roles on the Permissions page? In data/inputs/app_permissions_manager I get "Action is not available" and "Current instance is running SHC". There are only ess_analyst and ess_user. Thanks.
Still searching for a solution and/or root cause. So far we just have to flag it as a known error and check in from time to time to see if anyone else has come up with a different idea. But the files are gone, as the error states, as if the process completed, yet it is still tracking the file and attempting to remove it again.
Thank you so much for this post! It solved my problem upgrading from 3.9 to 3.16.
@ITWhisperer I am asking about one particular table column. Where do I use the id in it? In the screenshot below, both columns have multivalue fields. If I use tableCellColourWithoutJS, the other column is affected as well, so I want to hide the mv index for that particular column only.
Sorry, but this is just not true. The CIM add-on does not on its own provide extractions, nor does it contain any additional dashboards or reports apart from a few directly connected with the CIM state. It provides a common (hence the name) standard to which the data should be normalized using add-ons specific to each separate type of source data.
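As a hedged illustration of what that normalization buys you (assuming the Authentication data model is populated by source-specific add-ons in your environment), a CIM-based search can ignore the underlying source formats entirely:

```
| tstats count from datamodel=Authentication
    where Authentication.action="failure"
    by Authentication.src, Authentication.user
| sort - count
```

The same query works regardless of whether the raw events came from Windows, Linux, or a firewall, as long as the relevant add-on maps them to the Authentication data model.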
As I said, change the tableCellColourWithoutJS to be the id of your panel
OK, we got this to work in a Classic dashboard; however, this drilldown does not work in the new Dashboard Studio. What we use now is the below, with $row.id|n$ filling in the data from the field we want. However, Dashboard Studio does not evaluate $row.id|n$; it just treats it as part of the URL and never inserts the ID. { hxxps://mySplunkcloud/en-US/app/missioncontrol/mc_incident_review#/$row.id|n$ }
1. I don't see why you calculate the IN/OUT parameter if you don't need this value in the end.
2. Assuming you don't need the DIR field, you can simply use xyseries to put your values into an x/y table:
| xyseries Date file count
Now you can just calculate your sum/difference of the various files, as you have them as separate fields (you might have to fillnull with zero if you have blanks).
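Putting those two steps together, a minimal sketch (assuming the file names from the original question and that the preceding stats produced Date, file, and count):

```
| xyseries Date file count
| fillnull value=0 RPWARDA SPWARAA SPWARRA
| eval Diff=RPWARDA-(SPWARAA+SPWARRA)
```

The fillnull guards against dates where one of the files never appeared, which would otherwise make the eval return null.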
My additional two cents on that: this combined search uses a subsearch whose results are appended to the results of the "main" initial search. You have to understand the limitations of subsearches. They have limits on the number of returned results (which you might not hit here, since you're summarizing the data with stats, so you'd probably return just a bunch of rows) and, more importantly, on execution time. This matters because if your subsearch runs for too long it gets finalized silently, which means that only the values calculated so far are returned to the outer search, and you have no indication whatsoever that the subsearch wasn't allowed to run to its natural end. So in the end you might get no results/incomplete results/wrong results and not be aware of it. Therefore it's advisable to:
1. Keep the searches short (meaning not searching through a lot of data).
2. If possible, use indexed fields (as with the tstats command).
3. If you have two searches that differ significantly in number of results and execution time, use the small/short one as the appended/joined subsearch.
So in this particular scenario I'd swap the initial raw data search with tstats to lower the probability of the whole search "running away".
index=_internal host=splunk_shc source=*license_usage.log* type=Usage
| stats sum(b) as Usage by h
| eval Usage=round(Usage/1024/1024/1024,2)
| rename h as host, Usage as usage_latest_hour
| append [ | tstats count where index=* by host, index, sourcetype ]
| stats values(count) as events_latest_hour values(usage_latest_hour) as usage_latest_hour by host, index, sourcetype
| sort - events_latest_hour, usage_latest_hour
Hi @ITWhisperer, it's working, but if I want to use any multivalue field in the table, the result might be affected, right? Is it possible to hide values for a particular table column using CSS?
After discussing with AppDynamics support, we found these observations and a solution. The reason you are not observing any database calls is that not a single Business Transaction has been detected. The agent only detects outgoing calls if they occur within the context of a Business Transaction (BT).

Next steps: please apply the below-mentioned node property:
Name: find-entry-points
Type: Boolean
Value: True
This property enables a feature that recognizes all the potential entry points that can be created as custom match rules. For more information on how to set/change a node property, please check the reference doc: App Agent Node Properties. Once you have changed the property value, collect the agent debug log files and share them with us, then revert the change. Reference doc: How do I find and configure missing entry points?

We found a couple of potential entry points that will allow you to monitor JDBC calls by enabling the

Next steps: please create POJO BTs on the following classes/methods:
BT-1 Class: com.imi.timer.quartz.cronjobs.PauseCampaignMonitor Method: execute
BT-2 Class: com.imi.timer.quartz.cronjobs.StopCampaignMonitor Method: execute
BT-3 Class: com.imi.timer.quartz.cronjobs.AbandonedTargetsMonitor Method: execute
BT-4 Class: com.imi.imicampaign.security.cache.CustomerDetailsCacheManager Method: updateCustomerKeyDetailsCache
BT-5 Class: com.imi.timer.quartz.cronjobs.InstanceDetailsUpdateMonitor Method: execute
POJO BT doc: https://docs.appdynamics.com/appd/24.x/24.3/en/application-monitoring/configure-instrumentation/transaction-detection-rules/custom-match-rules/java-business-transaction-detection/pojo-entry-points
Hello, I am receiving Darktrace events through my Edge Processor acting as a forwarder, and I am a bit new to the SPL2 pipeline. It can probably be solved by transforming something in the pipeline. The problem is that I am indexing events with a _time of -5h and a 2h difference from the event timestamp. Here is an example:

Time in the Edge Processor:

It should be noted that the rest of the events I ingest through this server arrive at the correct time.
Hi @Real_captain, you can try the below:
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| stats count(eval(file="RPWARDA")) AS RPWARDA, count(eval(file="SPWARAA")) AS SPWARAA, count(eval(file="SPWARRA")) AS SPWARRA by Date
| eval Diff=(RPWARDA-(SPWARAA+SPWARRA))
I see there is a premium app to show CDR data from CUCM, but is there a way to view this data by running a search without that app? I have Splunk set up as a billing server in CUCM but am unable to find any CDR data. We are using Enterprise on-prem.
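As a hedged first step (the sourcetype patterns here are assumptions, since the names depend on how the CUCM input was configured), you could check whether any CDR events have been indexed at all by listing sourcetypes rather than searching raw events:

```
| metadata type=sourcetypes index=*
| search sourcetype=*cdr* OR sourcetype=*cucm*
| table sourcetype totalCount lastTime
```

If nothing matches, the data likely never reached Splunk, and the billing-server transfer (SFTP/FTP from CUCM) is the place to troubleshoot before worrying about any app.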
Thanks @scelikok, it's working. I am using coalesce: if PRD succeeds, I show the success message; on error I want to show the error msg instead of the PRD error message. I tried like the below, but it's not working:
| eval output=mvfilter(match(message,"^PRD"))
| eval Response=coalesce(error,errorMessage,output)
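One possible cause is that mvfilter can return a multivalue result, so output is not a single string when several message lines start with "PRD". A hedged sketch of a workaround, assuming error and errorMessage are only present on failures:

```
| eval output=mvindex(mvfilter(match(message,"^PRD")),0)
| eval err=coalesce(error,errorMessage)
| eval Response=if(isnotnull(err), err, output)
```

Here mvindex collapses output to the first matching value, and the explicit if makes the error-over-success precedence visible; the field names err/output are illustrative only.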
Hi Team, I want to know if it is possible to find the count of specific fields and show them in different columns. Example:

For the above example, I want the result in the below format:
| Date | Count of RPWARDA | Count of SPWARAA | Count of SPWARRA | Diff (RPWARDA - (SPWARAA + SPWARRA)) |
| 2024/04/10 | 49 | 38 | 5 | 6 |
Is it possible using a Splunk query?

Original query:
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA))
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| eval DIR = if(file="RPWARDA","IN","OUT")
| convert timeformat="%Y/%m/%d" ctime(_time) AS Date
| stats count by Date, file, DIR
I am trying to access ACS services (Admin Config Services) on a Splunk Cloud trial, but am not able to. After acs login, I am getting an error:

linuxadmin@linuxxvz:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{
"code": "500-internal-server-error",
"message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=1ccdf228-d137-923d-be35-9eaad590d15c). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error

linuxadmin@linuxvm:~$ acs login --token-user test_acs_user
Enter Username: sc_admin
Enter Password:
An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=5073a1f1-79d0-9ac1-9d9a-675df569846f). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips.
{
"code": "500-internal-server-error",
"message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=5073a1f1-79d0-9ac1-9d9a-675df569846f). Please refer https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}
Error: stack login failed: POST request to "https://admin.splunk.com/prd-p-pg6yq/adminconfig/v2/tokens" failed, code: 500 Internal Server Error

Can someone please help here?
How can a Splunk admin give access to the service account AB-CDRWYVH-L? Access needed: Splunk API read/write access.
Hi @NReddy12, I never experienced this behavior on a Linux server. The only hint is to open a case to Splunk Support, sending them a diag of your Universal Forwarder. Ciao. Giuseppe
Hi @phanikumarcs, good for you, see you next time! Ciao and happy splunking. Giuseppe
P.S.: Karma Points are appreciated by all the contributors.