All Posts

[UPDATE] When I used the same certificate for Splunk Web and Splunk Secure Connection, it worked normally. But when I used different certificates for Splunk Web and Splunk Secure Connection, the error kept happening. Before, Splunk Web used a certificate with only the TLS Web Server purpose, and Splunk Secure Connection used a certificate with both the TLS Web Server and TLS Web Client purposes. And I really need to use separate certs for my case.
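For reference, and assuming "Splunk Secure Connection" here means splunkd's TLS (an assumption on my part, not confirmed by the post): Splunk Web and splunkd read their certificates from different config files, so separate certs would look roughly like this, with placeholder paths:

web.conf
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/web_cert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/web_key.pem

server.conf
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/splunkd_cert.pem
sslPassword = <password>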
Hi all, I'm trying to get data into the CrowdStrike Intel Indicators Technical Add-On following this guide in a US Commercial 2 cloud environment. I realized that I can't find the Indicators (Falcon Intelligence) permission for the API token that the document mentions. I found that it has IOCs (Indicators of Compromise), Actors (Falcon Intelligence), and Reports (Falcon Intelligence), so I checked those. But it still has an "ACCESS DENIED" error like:

ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, please provide this TraceID to CrowdStrike support = <device_id>
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: Error contacting the CrowdStrike Device API, error message = access denied, authorization failed
ERROR pid=6180 tid=MainThread file=base_modinput.py:log_error:317 | CrowdStrike Intel Indicators TA 3.1.3 CrowdStrike_Intel_Indicators: TA is shutting down

I have already used the same API token for the CrowdStrike Event Streams Technical Add-On and it works normally. Please help me fix this! Thank you.
Hi @woodcock  Please tell me how to do this configuration. Can we set how long the log is kept, and if so, for how long?
To diagnose these problems, run the outer search on its own and the inner search on its own. You are using join, which is not necessary and may be the issue depending on your data size. You don't need the table commands all the time, and you seem to be duplicating your time parsing (time and _time). I'm not sure you need reverse either - in the join, you are reversing to get the first timestamp, which in practice is the oldest _time, so you could just use earliest(timestamp) instead, without reverse.
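Purely as a sketch, since the original search isn't shown (the field names id, timestamp, and other_field are hypothetical): combining both searches and taking the earliest value per key usually replaces the join/reverse pattern:

(outer search terms) OR (inner search terms)
| stats earliest(timestamp) as first_timestamp values(other_field) as other_field by id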
Echoing what @PickleRick said, you can throw hardware at the problem, but if your users are doing bad searches (transaction/join) or other poor subsearches, or using lots of poorly chosen wildcards, or need to use eventstats, streamstats, mvexpand, or sort a lot, then you will be pushing load from the indexers to the search heads, so adding indexers will not solve the problem, and adding search heads will just ensure that you will need to do it again if the same users keep writing bad searches. You really need to look at the monitoring console to identify whether poorly performing searches are running, then look at those searches and identify what the performance issue is.
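One hedged starting point for finding the heavy searches, assuming the default audit fields (user, total_run_time) are present in your environment:

index=_audit action=search info=completed
| stats count avg(total_run_time) as avg_run_time max(total_run_time) as max_run_time by user
| sort - max_run_time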
It's easy enough in a dashboard to create one time range for searches based on an input time range set in the time picker. You just create a global search that uses the time picker's time, then use addinfo to get the epoch range of the time picker, do calculations on that, and set appropriate tokens. Here are some example posts that talk about it in dashboards.
https://community.splunk.com/t5/Getting-Data-In/How-to-count-events-for-specific-time-period-now-and-7-days/m-p/633364/highlight/true#M108437
https://community.splunk.com/t5/Dashboards-Visualizations/How-to-create-dynamic-label-based-on-time-input-change/m-p/629246/highlight/true#M51614
https://community.splunk.com/t5/Dashboards-Visualizations/How-convert-input-date-time-to-token-values-and-display-on/m-p/612985/highlight/true#M50281
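A minimal sketch of the addinfo technique (token names are up to you, and note that info_max_time can be "+Infinity" for all-time ranges):

| makeresults
| addinfo
| eval picker_start = info_min_time, picker_end = info_max_time
| eval range_seconds = picker_end - picker_start
| eval previous_start = picker_start - range_seconds

In Simple XML you would then set tokens from these fields, e.g. in the search's <done> handler via $result.previous_start$.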
@merrelr Edit the permissions of the splunk-dashboard-images collection and make it readable to your user roles.
It's been 4 days, but still no response from the Splunk Slack admins. Any ideas or suggestions on how to proceed, please? Thanks.
Ok, thank you.
No. You can't do that using tstats. You can do the search using | datamodel Malware search or | from datamodel=Malware and then do normal stats, but that way you won't be able to leverage the accelerated summaries. You could try to append two separate tstats searches (one split by file name and one without) using tstats with prestats=t and append=t, but that's some very confusing functionality.
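A rough, untested sketch of that two-search idea, written with a plain append instead of the prestats form for readability (subject to subsearch limitations; the dataset and field names are assumptions about your data model):

| tstats values(Malware.dest) as dest from datamodel=Malware where Malware.file_name=* by Malware.signature Malware.file_name
| append [| tstats values(Malware.dest) as dest from datamodel=Malware where NOT Malware.file_name=* by Malware.signature]
| eval grouping_signature=if(isnotnull('Malware.file_name'), 'Malware.signature'.":".'Malware.file_name', 'Malware.signature')
| stats dc(dest) as infected_device_count by grouping_signature
| where infected_device_count > 4

stats dc() counts distinct values across multivalue fields, so the split and no-split branches combine cleanly.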
When running a query with distinctcount in the UI, it works as expected and returns multiple results. But when run via the API I only get 20 results no matter the limit:

curl --location --request POST 'https://analytics.api.appdynamics.com/events/query?end=1700243357711&limit=100&start=1700239757711' \
--header 'PRIVATE-TOKEN: glpat-cQefeyVa51vG__mLtqM6' \
--header 'X-Events-API-Key: $API_KEY' \
--header 'X-Events-API-AccountName: $ACCOUNT_NAME' \
--header 'Content-Type: application/vnd.appd.events+json;v=2' \
--header 'Accept: application/vnd.appd.events+json;v=2' \
--header 'Authorization: Basic <bearer>' \
--data-raw '{"query":"SELECT toString(eventTimestamp), segments.node, requestGUID, distinctcount(segments.tier) FROM transactions","mode":"scroll"}'

result:

[ { "label": "0",
    "fields": [
      { "label": "toString(eventTimestamp)", "field": "toString(eventTimestamp)", "type": "string" },
      { "label": "segments.node", "field": "segments.node", "type": "string" },
      { "label": "requestGUID", "field": "requestGUID", "type": "string" },
      { "label": "distinctcount(segments.tier)", "field": "segments.tier", "type": "integer", "aggregation": "distinctcount" }
    ],
    "results": [
      [ "2023-11-17T16:55:14.472Z", "node--1", "82bbb595-88b7-4e81-9ded-56cb5a42c251", 1 ],
      [ "2023-11-17T16:55:14.472Z", "node--7", "c7785e77-deb9-4aff-93fe-35efa7299871", 1 ],
      [ "2023-11-17T16:55:22.777Z", "node--7", "3c22d9dd-74a3-496c-b8b9-1c4ce48a6b1f", 1 ],
      [ "2023-11-17T16:55:22.777Z", "node--7", "d86e97ff-91c8-45f9-832e-ec6f06ffff26", 1 ],
      [ "2023-11-17T16:55:29.959Z", "node--1", "44abeb53-30e6-4973-8afb-f7cfc52a9ba7", 1 ],
      [ "2023-11-17T16:55:29.959Z", "node--1", "b577b0b0-2d41-4dfa-aceb-e42b5bb39348", 1 ],
      [ "2023-11-17T16:56:55.468Z", "node--1", "2f92e785-8cd0-4028-acd7-21fa794eb004", 1 ],
      [ "2023-11-17T16:56:55.468Z", "node--1", "af7ca09b-c4fc-4c73-8502-26d92b2b5835", 1 ],
      [ "2023-11-17T16:58:13.694Z", "node--1", "4a04304b-f48f-4c8d-9034-922077dfcdb4", 1 ],
      [ "2023-11-17T16:58:13.694Z", "node--7", "22755c60-efa0-434f-be73-0f02f0222021", 1 ],
      [ "2023-11-17T17:00:36.983Z", "node--1", "b6386249-6408-4517-812c-ca3d5e6c304c", 1 ],
      [ "2023-11-17T17:00:36.983Z", "node--1", "f7c63152-b569-42bf-8d37-32bb181d7028", 1 ],
      [ "2023-11-17T17:05:50.737Z", "node--1", "306c7cb3-0eff-440e-96e1-4ad61ce01a0c", 1 ],
      [ "2023-11-17T17:05:50.737Z", "node--1", "b33c04bd-dfd6-452a-b425-7639dad0422d", 1 ],
      [ "2023-11-17T17:08:24.554Z", "node--7", "0ed8c3a6-a59c-4fee-a10e-e542c239aa94", 1 ],
      [ "2023-11-17T17:08:24.554Z", "node--7", "c0e6f198-d388-49da-96a6-d9d95868c40f", 1 ],
      [ "2023-11-17T17:10:06.483Z", "node--1", "cf3caef6-2688-4eac-a2a1-b2edb55f0569", 1 ],
      [ "2023-11-17T17:10:06.483Z", "node--7", "40f5a786-e12d-4b0c-a54e-a330e7b7afd1", 1 ],
      [ "2023-11-17T17:11:47.167Z", "node--7", "02c3a025-6def-499d-9315-68551df639d4", 1 ],
      [ "2023-11-17T17:11:47.167Z", "node--7", "f19df1e3-983d-4a72-a5e0-a3f9ffbb7afe", 1 ]
    ],
    "moreData": true,
    "schema": "biz_txn_v1"
} ]
Thank you. That works for fields (like signature in this example) which are directly available from the data model. But if we want to create new fields within the search (like grouping_signature in this example) to perform some calculations using eval or string concatenation, and then use them in a group by, how could we accomplish that in the tstats query? In this example, I want to use eval to concatenate the signature and file_name fields into a new field called grouping_signature and then use the new field for the group by. If file_name is not present, then only use signature for the group by (that's why eval is used to perform that check).
OK. I never remember the proper syntax for tstats from a datamodel, so you might need to correct this a bit, but you'd probably want something like this:

| tstats values(Malware.dest) as hosts from datamodel=Malware.something by Malware.signature

This will give you a list of hosts for each signature over a given period. Now you might want to put it through something like:

| where mvcount(hosts)>4

You can't do complicated aggregations with tstats - that's why you should normalize your data. That's what the whole datamodel is for.
Ok, I have tried to simplify the query for better understanding and remove some unnecessary things. This query is to find out if the same malware has been found on more than 4 hosts (dest) in a given time span, something like a malware outbreak. Below is the index-based query that works fine. I am trying to convert this to a data model based query.

(`cim_Malware_indexes`) tag=malware tag=attack
| eval grouping_signature=if(isnotnull(file_name),signature . ":" . file_name,signature)   => trying to create a new field called "grouping_signature" by concatenating the signature and file_name fields
| stats count dc(dest) as infected_device_count BY grouping_signature   => trying to calculate the distinct count of hosts that have the same malware found on them, by the "grouping_signature" field
| where infected_device_count > 4   => trying to find events where the number of infected devices is greater than 4
| stats sum(count) AS "count" sum(infected_device_count) AS infected_device_count BY grouping_signature   => trying to find the total number of infected hosts by the "grouping_signature" field
Hi @Oti47,

Edit: This answer applies to Simple XML dashboards. Dashboard Studio may be limited to a single category field, as @richgalloway noted.

Edit 2: Snap. The Dashboard Studio scatter visualization is limited to x, y, and category fields in that order.

The search fragment for the scatter chart visualization provides a hint:

| stats x_value_aggregation y_value_aggregation by name_category [comparison_category]

If you're using a report, output from the inputlookup or table command, etc., make sure the fields are in name_category, comparison_category, x_value_aggregation, y_value_aggregation order by applying the stats command:

| stats values(category) as category values(value) as value by PartnerId article

where PartnerId and article are categorical values and category and value are numerical values. You can re-order the aggregation fields, category and value, and the split-by fields, PartnerId and article, as needed for your intended display:

| stats values(value) as value values(category) as category by article PartnerId

Reformatting the output with the stats command adds the user interface field metadata (groupby_rank) used by the visualization to identify the name_category ("groupby_rank": 0) and comparison_category ("groupby_rank": 1) fields. As a rule of thumb, the commands referenced in a visualization's search fragment will produce the desired result. Visualizations that reference chart, stats, timechart, xyseries, etc. most likely use internal metadata to format their output.

You can then use drilldown tokens associated with the data. See https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#Predefined_drilldown_tokens. $click.value$ should represent the scatter chart name_category value, and $click.value2$ should represent the scatter chart comparison_category value.
Yes. That is correct. I forgot about how I must manage the new indexes.conf and push it, because I faced many issues when the indexes.conf on the indexers didn't match the indexes.conf on the CM. I'd also be very grateful if @woodcock could give me advice.
Yes, I did not round the max after eventstats, and I am able to post stats without rounding. I tested the _raw data from earlier and it does work with this search, showing the min, max and avg properly.

| makeresults
| eval data=mvappend("{\"dsnames\": [\"read\", \"write\"], \"values\": [123, 234]}", "{\"dsnames\": [\"read\", \"write\"], \"values\": [456, 567]}")
| mvexpand data
| rename data as _raw
| spath
| eval data = mvappend(json_object("dsname", mvindex('dsnames{}', 0), "value", mvindex('values{}', 0)), json_object("dsname", mvindex('dsnames{}', 1), "value", mvindex('values{}', 1)))
| mvexpand data
| spath input=data
| stats min(value) as min max(value) as max avg(value) as avg by dsname
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

The other way, sorting and comparing with max, does give me results.

{"dsname":"read","value":"0"}
{"values":[0,23347.1366453364],"dstypes":["derive","derive"],"dsnames":["read","write"],"time":1700387069.996,"interval":10.000,"host":"usorla7sp103x.ad101.siemens-energy.net","plugin":"disk","plugin_instance":"dm-0","type":"disk_octets","type_instance":""}

I am still not sure why max would be the same, as those values should be different just on the basis that the "maximum number of disk operations or disk time for operations or disk traffic" should be different for read and written data, logically speaking.
1. Why do | eval dest=lower(dest)? CIM is for normalizing your data. Do it properly - unify the case of your names.
2. if(isnotnull ... can be expressed more clearly with coalesce().
3. You're searching CIM indexes but you're manually doing things like | rename computerDnsName as dest. You should have done that as a calculated field to make your data CIM-compliant.

So first make your data CIM-compliant, then tell us what you want to achieve.
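For example, a calculated field in props.conf would handle both the rename and the case normalization at search time (the sourcetype name here is a placeholder):

[your:sourcetype]
EVAL-dest = lower(coalesce(computerDnsName, dest))

With that in place, every search sees a consistent, lowercase dest without per-query renames.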
I'm not sure if there was any modification to the copy-pasted config and/or events but your regex doesn't allow for spaces between the semicolon after the key name and the value.
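Purely as a hypothetical illustration, since the actual regex and events weren't confirmed: if the pairs look like key; value, a capture along the lines of

(?<key>\w+);\s*(?<value>[^;]+)

allows optional whitespace after the semicolon via \s*, whereas a pattern expecting the value to start immediately after the separator would not match.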
Moving indexes on a working cluster is a tricky thing to do, since:
1) You have to physically move the data
2) You have to push the indexes.conf from the CM
3) The indexes.conf has to be consistent across the whole cluster.

So the issue is very tricky, and I'd do a lot of testing before attempting it on a prod environment. You could get away with taking the whole cluster down, moving the data around physically, and deploying a "fixed" indexes.conf both on the CM and on each individual indexer. But again - testing, testing, testing. There are many things that could go wrong here.

Generally, the best practice would be to leave those indexes alone and not move them around - if there is a _new_ requirement, just create a new index on a new storage unit, set the proper size/age constraints, and stick with it.
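As a sketch of that last option (index name, paths, and limits are placeholders to adapt), the new index would just point at the new storage unit in the indexes.conf pushed from the CM:

[new_index]
homePath = /mnt/new_storage/new_index/db
coldPath = /mnt/new_storage/new_index/colddb
thawedPath = $SPLUNK_DB/new_index/thaweddb
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 7776000

Existing data stays where it is; only new data lands on the new volume.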