All Posts

Machine agent is not starting. I downloaded the machine agent using my AppDynamics login, which provided me with a pre-configured setup for my account. When I try to run the agent using java -jar machineagent.jar, I only see the details below and the agent does not initialize:

2023-11-20 11:27:49.417 Using Java Version [11.0.20] for Agent
2023-11-20 11:27:49.417 Using Agent Version [Machine Agent v23.10.0.3810 GA compatible with 4.4.1.0 Build Date 2023-10-30 07:13:09]

Earlier the agent was starting but was not reporting CPU, disk, and memory metrics. It only showed the running processes but no metrics data. Please suggest.
The query below produces results:

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| search job_name=*"Group06"* OR job_name=*"Group01"*
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass

but this one does not ($group$ is a dropdown whose selected option is Group06):

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| eval rerunGroup = case("$group$"=="Group06", "Group01", "$group$"=="Group07", "Group02", "$group$"=="Group08", "Group03", "$group$"=="Group09", "Group04", "$group$"=="Group10", "Group05", 1==1, "???")
| search job_name=*$group$* OR job_name=*rerunGroup*
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass
| fillnull value="Test Inprogress..." Pass

(Adding | table rerunGroup after the eval shows Group01 in the table, so the eval itself works.)

There is no big difference except the eval statement and passing the variable value. Can someone please help?
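One thing worth checking (a hedged guess, not a confirmed diagnosis): in the search command, a bare name like rerunGroup is treated as the literal text *rerunGroup*, not as the value of the rerunGroup field. A minimal sketch of a comparison that does use the eval result, via where and like():

index="jenkins" sourcetype="json:jenkins" job_name="$env$_Group*" event_tag=job_event type=completed
| eval rerunGroup = case("$group$"=="Group06", "Group01", "$group$"=="Group07", "Group02", 1==1, "???")
| where like(job_name, "%$group$%") OR like(job_name, "%" . rerunGroup . "%")
| head 2
| dedup build_number
| stats sum(test_summary.passes) as Pass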
Hello Experts,

I was wondering if you can help me figure out how to show the merged values in a field as 'unmerged' when using 'values' in the stats command. (DETAILS_SVC_ERROR) and (FARE/PRCNG/AVL-MULT. RSNS) are different values, but they come out merged: when I use "values" or "list", it merges all values into one. How do I unmerge them? If I use 'mvexpand', it expands to a single count even if the values are the same.

Thanks in advance
Nishant
This doesn't show the Total. Total here should mean (txnStatus=FAILED + txnStatus=SUCCEEDED). With the above solution, the Total is only the total of 'FAILED' in txnStatus. I want the total to be the absolute total (FAILED + SUCCEEDED).
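A minimal sketch of one way to get both counts plus the absolute total (assuming a field txnStatus with values FAILED and SUCCEEDED; the base search is omitted):

| stats count(eval(txnStatus="FAILED")) as failed count(eval(txnStatus="SUCCEEDED")) as succeeded
| eval Total = failed + succeeded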
I, too, found this very helpful. Thanks guys.
Hi @richgalloway  I have changed the stanza from monitor to script but am still unable to see any data in Splunk. Is there anything else I have to check?
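For reference, a minimal scripted-input stanza sketch in inputs.conf (the script path, interval, index, and sourcetype are placeholders, not taken from the original post):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.sh]
# run every 60 seconds
interval = 60
index = main
sourcetype = my_script_output
disabled = 0

If data still does not appear, checking that the script is executable and looking for errors in $SPLUNK_HOME/var/log/splunk/splunkd.log is usually the next step.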
[UPDATE] When I used the same certificate for Splunk Web and Splunk Secure Connection, it worked normally. But when I used different certificates for Splunk Web and Splunk Secure Connection, the error kept happening. Before, Splunk Web used a certificate with only the TLS Web Server purpose, and Splunk Secure Connection used a certificate with both the TLS Web Server and TLS Web Client purposes. And I really need to use separate certs for my case.
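For context, a sketch of where the two certificates would typically be configured (paths are placeholders; setting names assume a standard Splunk Enterprise install):

web.conf (Splunk Web):
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/web_cert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/web_key.pem

server.conf (splunkd / secure connections):
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/splunkd_cert.pem
sslPassword = <key password>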
Hi all, I'm trying to get data into the CrowdStrike Intel Indicators Technical Add-On following this guide in a US Commercial 2 cloud environment. I realized that I can't find the Indicators (Falcon Intelligence) permission for the API token that the document mentions. I then found that it has IOCs (Indicators of Compromise), Actors (Falcon Intelligence), and Reports (Falcon Intelligence), so I checked those. But it still fails with an "ACCESS DENIED" error like:

ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, please provide this TraceID to CrowdStrike support = <device_id>
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, error message = access denied, authorization failed
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: TA is shutting down

I have already used the same API token for the CrowdStrike Event Streams Technical Add-On and it works normally. Please help me fix this! Thank you.
Hi @woodcock  Please tell me how to do this configuration. Can we set how long the logs are kept, and if so, how?
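If the question is about index retention (an assumption on my part), a minimal indexes.conf sketch; the index name and the 90-day value are placeholders:

[my_index]
# roll buckets to frozen (deleted by default) after 90 days
frozenTimePeriodInSecs = 7776000
# optional cap on total index size
maxTotalDataSizeMB = 500000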
To diagnose these problems, run the outer search on its own and the inner search on its own. You are using join, which is not necessary and may be the issue depending on your data size. You don't need the table commands all the time, and you seem to be duplicating your time parsing (time and _time). Not sure you need reverse either: in the join, you are reversing to get the first timestamp, which in practice would be the oldest _time, so you could just use earliest(timestamp) instead, without the reverse.
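A minimal sketch of that last point (field names here are assumed, not taken from the original search). Instead of

... | reverse | head 1 | table timestamp

let stats pick the oldest value directly:

... | stats earliest(timestamp) as first_timestamp by correlation_id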
Echoing what @PickleRick said, you can throw hardware at the problem, but if your users are writing bad searches (transaction/join), other poor subsearches, lots of poorly chosen wildcards, or searches that lean heavily on eventstats, streamstats, mvexpand, or sort, then you will be pushing load from the indexers to the search heads. Adding indexers will not solve the problem, and adding search heads will just ensure that you will need to do it again if the same users keep writing bad searches. You really need to look at the monitoring console to identify whether there are poor searches running, then look at those searches and identify what the performance issue is.
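A minimal sketch of one way to spot the heaviest searches from the audit logs (assuming access to the _audit index; the monitoring console presents the same kind of data with more detail):

index=_audit action=search info=completed
| stats count avg(total_run_time) as avg_runtime max(total_run_time) as max_runtime by user
| sort - max_runtime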
It's easy enough in a dashboard to create one time range for searches based on an input time range set in the time picker. You just create a global search that uses the time picker's time, then use addinfo to get the epoch range of the time picker, do calculations on that, and set appropriate tokens. Here are some example posts that talk about it in dashboards. https://community.splunk.com/t5/Getting-Data-In/How-to-count-events-for-specific-time-period-now-and-7-days/m-p/633364/highlight/true#M108437 https://community.splunk.com/t5/Dashboards-Visualizations/How-to-create-dynamic-label-based-on-time-input-change/m-p/629246/highlight/true#M51614 https://community.splunk.com/t5/Dashboards-Visualizations/How-convert-input-date-time-to-token-values-and-display-on/m-p/612985/highlight/true#M50281
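A minimal sketch of the addinfo part (field and token names past addinfo are placeholders): a global search bound to the time picker that exposes the picker's epoch range for calculations:

| makeresults
| addinfo
| eval earliest_epoch = info_min_time, latest_epoch = info_max_time
| eval prior_earliest = earliest_epoch - 7*86400

In Simple XML, the computed values would then be captured in the search's <done> handler with <set token="..."> so other panels can use them.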
@merrelr Edit the permissions of the collection below and make it readable to your user roles: splunk-dashboard-images
It's been 4 days, but still no response from the Splunk Slack admins. Any ideas or suggestions on how to proceed, please? Thanks.
Ok, thank you.
No. You can't do that using tstats. You can do the search using | datamodel Malware search or | from datamodel=Malware and then do normal stats, but that way you won't be able to leverage the accelerated summaries. You could try to append two separate tstats searches (one split by file name and one without) using tstats with prestats=t and append=t, but that's some very confusing functionality.
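A minimal sketch of the non-accelerated variant (assuming the CIM Malware model's root dataset is Malware_Attacks; field prefixes may differ depending on how you search the model):

| datamodel Malware Malware_Attacks search
| rename Malware_Attacks.* as *
| eval grouping_signature=if(isnotnull(file_name), signature . ":" . file_name, signature)
| stats dc(dest) as infected_device_count by grouping_signature
| where infected_device_count > 4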
When running a query with distinctcount in the UI, it works as expected and returns multiple results. But when run via the API, I only get 20 results no matter the limit:

curl --location --request POST 'https://analytics.api.appdynamics.com/events/query?end=1700243357711&limit=100&start=1700239757711' \
--header 'PRIVATE-TOKEN: glpat-cQefeyVa51vG__mLtqM6' \
--header 'X-Events-API-Key: $API_KEY' \
--header 'X-Events-API-AccountName: $ACCOUNT_NAME' \
--header 'Content-Type: application/vnd.appd.events+json;v=2' \
--header 'Accept: application/vnd.appd.events+json;v=2' \
--header 'Authorization: Basic <bearer>' \
--data-raw '{"query":"SELECT toString(eventTimestamp), segments.node, requestGUID, distinctcount(segments.tier) FROM transactions","mode":"scroll"}'

Result:

[ {
  "label": "0",
  "fields": [
    { "label": "toString(eventTimestamp)", "field": "toString(eventTimestamp)", "type": "string" },
    { "label": "segments.node", "field": "segments.node", "type": "string" },
    { "label": "requestGUID", "field": "requestGUID", "type": "string" },
    { "label": "distinctcount(segments.tier)", "field": "segments.tier", "type": "integer", "aggregation": "distinctcount" }
  ],
  "results": [
    [ "2023-11-17T16:55:14.472Z", "node--1", "82bbb595-88b7-4e81-9ded-56cb5a42c251", 1 ],
    [ "2023-11-17T16:55:14.472Z", "node--7", "c7785e77-deb9-4aff-93fe-35efa7299871", 1 ],
    [ "2023-11-17T16:55:22.777Z", "node--7", "3c22d9dd-74a3-496c-b8b9-1c4ce48a6b1f", 1 ],
    [ "2023-11-17T16:55:22.777Z", "node--7", "d86e97ff-91c8-45f9-832e-ec6f06ffff26", 1 ],
    [ "2023-11-17T16:55:29.959Z", "node--1", "44abeb53-30e6-4973-8afb-f7cfc52a9ba7", 1 ],
    [ "2023-11-17T16:55:29.959Z", "node--1", "b577b0b0-2d41-4dfa-aceb-e42b5bb39348", 1 ],
    [ "2023-11-17T16:56:55.468Z", "node--1", "2f92e785-8cd0-4028-acd7-21fa794eb004", 1 ],
    [ "2023-11-17T16:56:55.468Z", "node--1", "af7ca09b-c4fc-4c73-8502-26d92b2b5835", 1 ],
    [ "2023-11-17T16:58:13.694Z", "node--1", "4a04304b-f48f-4c8d-9034-922077dfcdb4", 1 ],
    [ "2023-11-17T16:58:13.694Z", "node--7", "22755c60-efa0-434f-be73-0f02f0222021", 1 ],
    [ "2023-11-17T17:00:36.983Z", "node--1", "b6386249-6408-4517-812c-ca3d5e6c304c", 1 ],
    [ "2023-11-17T17:00:36.983Z", "node--1", "f7c63152-b569-42bf-8d37-32bb181d7028", 1 ],
    [ "2023-11-17T17:05:50.737Z", "node--1", "306c7cb3-0eff-440e-96e1-4ad61ce01a0c", 1 ],
    [ "2023-11-17T17:05:50.737Z", "node--1", "b33c04bd-dfd6-452a-b425-7639dad0422d", 1 ],
    [ "2023-11-17T17:08:24.554Z", "node--7", "0ed8c3a6-a59c-4fee-a10e-e542c239aa94", 1 ],
    [ "2023-11-17T17:08:24.554Z", "node--7", "c0e6f198-d388-49da-96a6-d9d95868c40f", 1 ],
    [ "2023-11-17T17:10:06.483Z", "node--1", "cf3caef6-2688-4eac-a2a1-b2edb55f0569", 1 ],
    [ "2023-11-17T17:10:06.483Z", "node--7", "40f5a786-e12d-4b0c-a54e-a330e7b7afd1", 1 ],
    [ "2023-11-17T17:11:47.167Z", "node--7", "02c3a025-6def-499d-9315-68551df639d4", 1 ],
    [ "2023-11-17T17:11:47.167Z", "node--7", "f19df1e3-983d-4a72-a5e0-a3f9ffbb7afe", 1 ]
  ],
  "moreData": true,
  "schema": "biz_txn_v1"
} ]
Thank you. That works for fields (like signature in this example) which are directly available from the data model. But if we want to create new fields within the search (like grouping_signature in this example) to perform some calculation with eval or string concatenation, and then use them in a group by, how could we accomplish that in the tstats query?

In this example, I want to use eval to concatenate the signature and file_name fields into a new field called grouping_signature and then use the new field for the group by. If file_name is not present, then use only signature for the group by (that's why eval is needed to perform that check).
OK. I never remember the proper syntax for tstats from a datamodel, so you might need to correct this a bit, but you'd probably want something like this:

| tstats values(Malware.dest) as hosts from datamodel=Malware.something by Malware.signature

This will give you the list of hosts for each signature in a given period. Now you might want to put it through

| where mvcount(hosts)>4

or something like that. You can't do complicated aggregations with tstats; that's why you should normalize your data. That's what the whole datamodel is for.
Ok, I have tried to simplify the query for better understanding, removing some unnecessary things. This query is to find out if the same malware has been found on more than 4 hosts (dest) in a given time span, something like a malware outbreak. Below is the index-based query that works fine. I am trying to convert this to a data-model-based query.

(`cim_Malware_indexes`) tag=malware tag=attack
| eval grouping_signature=if(isnotnull(file_name), signature . ":" . file_name, signature)
    => create a new field called "grouping_signature" by concatenating the signature and file_name fields
| stats count dc(dest) as infected_device_count BY grouping_signature
    => calculate the distinct count of hosts that have the same malware found on them, by the "grouping_signature" field
| where infected_device_count > 4
    => keep results where the number of infected devices is greater than 4
| stats sum(count) AS "count" sum(infected_device_count) AS infected_device_count BY grouping_signature
    => find the total number of infected hosts by the "grouping_signature" field