All Posts

If you post your existing XML it would be helpful, but I am assuming you have something like

<drilldown>
  <set token="token_icid">$row.icid$</set>
</drilldown>

There are a number of ways to do what you want, but one way is to build an additional constraint for icid that is either empty or the icid check, as the rest of the search is the same.

<drilldown>
  <set token="token_icid">$row.icid$</set>
  <eval token="token_query">if($row.icid$=0, "", "icid=\"".$row.icid$."\" OR ")</eval>
</drilldown>

Then your search can be

index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ($token_query$ mid="$token_mid$" OR "MID $token_mid$")

so you just add $token_query$, which is either empty or the additional icid constraint.
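Put together, a minimal Simple XML sketch might look like the following. This is a sketch under assumptions: the panel search is a placeholder, and the token and field names (mid, icid) are taken from the thread; adjust them to your dashboard.

```xml
<table>
  <search>
    <query>... your initial panel search ...</query>
  </search>
  <drilldown>
    <!-- Token names assumed from the thread -->
    <set token="token_mid">$row.mid$</set>
    <set token="token_icid">$row.icid$</set>
    <!-- token_query is empty when icid is 0, otherwise adds the icid constraint -->
    <eval token="token_query">if($row.icid$=0, "", "icid=\"".$row.icid$."\" OR ")</eval>
  </drilldown>
</table>
```

The target panel then interpolates $token_query$ directly into its search string, so a single search definition covers both cases.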
Your example is a little unclear, because it states index=other has i-abcdef1234567, but the next statement says i-abcdef1234567 is filtered out because it was NOT in index=other. Hopefully the following example demonstrates the principle. I am using makeresults to simulate your data set. The stats values combines the two, and the where clause is what you use for your exclusion logic. If that is not correct based on the above discrepancy, adjust as necessary. You can remove the where clause to see what the data looks like first.

| makeresults
| eval index="main", ResourceId=split("i-1234567abcdef,i-abcdef1234567,sg-12345abcde,abc", ",")
| mvexpand ResourceId
| append [
  | makeresults
  ``` and the index=other search returns InstanceId: i-abcdef1234567 ```
  | eval index="other", InstanceId=split("i-abcdef1234567,i-abcdef1234569",",")
]
| fields - _time
``` The above is just simulating your data setup ```
| eval ResourceId=coalesce(ResourceId, InstanceId)
| stats values(index) as index dc(index) as indexes by ResourceId
| where (indexes=1 AND index="main") OR indexes=2
``` I need the results to be (filtered out i-1234567abcdef because it was not returned by index=other): i-abcdef1234567 sg-12345abcde ```
It's common for knowledge objects in Splunk Cloud to be undeletable if they are defined in the default directory of an app.  In that case, you must edit the app and re-upload it. Some indexes cannot be deleted because they are system-defined.
Thank you for your response. I will check it out 
Before volunteers can help you achieve something, you need to explain what it is that you are trying to achieve without SPL (or ChatGPT). What do you mean by "in a drilldown?" You can have a drilldown only when you have an initial search (in a dashboard panel). What does the output of that search look like? Your code snippets suggest that you want to set tokens from that output. Is this correct? Which column from the initial search is designated to populate which token? What do you mean by "2 possible queries" when you "have a (aka ONE) drilldown?" Do you mean you have two other panels on the same dashboard that could use the token(s) populated by this drilldown? Again, taking away SPL, can you illustrate some data from the initial panel (anonymized as needed), then illustrate (aka tabulate) the end state of the two panels you wish to alter with this drilldown, and explain how the data is related to the end state (without SPL)? If any SPL is "not working", you need to explain/illustrate the data, describe/illustrate the actual output, illustrate the expected output, and explain why it is reasonable to arrive at that expected output. Sometimes you also need to explain how the two outputs differ if it is not painfully obvious.
Sorry, I made a typo in the search-time command that gets me what I need; it was supposed to say:

| eval CommandHistory = commandHistory_sed

I can make the effect happen at search time. The issue is I need to figure out how to have this effect applied at ingest time, so it is automatically applied to all of the events.
The rex command needs the name of an existing field in the field option. Try this:

| eval commandHistory = CommandHistory
| rex field=commandHistory mode=sed "s/\¶/\n/g"
Hello, Currently I'm attempting to make a CommandHistory field a bit more readable for our analysts, but I'm having trouble getting the formatting correct, or maybe I'm just using the wrong command or taking the wrong approach. Basically, our EDR dumps recent commands run on a system into the CommandHistory field, separated by a ¶ symbol. I'm trying to replace that with a new line at ingestion time.

Made-up example of what's in CommandHistory at the moment (I don't want to use real data, I apologize):

command1 -q lifeishard¶ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut¶ThirdCommand -u switchesare -cool¶One more command

The search-time commands that get me what I want in a field called commandHistory_sed:

| eval commandHistory = CommandHistory
| rex field=commandHistory_sed mode=sed "s/\¶/\n/g"

This ends up looking like this:

command1 -q lifeishard
ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut
ThirdCommand -u switchesare -cool
One more command

What I've tried in props.conf:

SEDCMD-substitute = 's/\¶/\n/g'
SEDCMD-alter = 's/\¶/\n/g'

Neither works. We have many other EVAL and FIELDALIAS statements under this sourcetype in props.conf that are functioning fine, so I think I'm just not formatting the SED properly or I'm not taking the right approach. Does anyone have any advice on what I am doing wrong and what I need to do to achieve the result? Thank you for any help in advance!
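Two things worth checking (assumptions, not a confirmed diagnosis): SEDCMD values in props.conf are not quoted, and index-time SEDCMD settings only take effect on the parsing tier (indexers or heavy forwarders), not on search heads. A minimal sketch, assuming a hypothetical sourcetype stanza name:

```
# props.conf on the indexer/HF tier -- the sourcetype name here is hypothetical
[edr:commandhistory]
# No quotes around the sed expression; SEDCMD runs at parse/index time
SEDCMD-commandhistory_newlines = s/¶/\n/g
```

Note that index-time settings are not retroactive: events already indexed keep their original _raw, and the change only applies to events ingested after the configuration is reloaded.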
When it comes to pulling in requirements for a dashboard, the number one requirement I have is "what is the core point of the overall dashboard?" Your questions are good, but may create a "metrics at the cost of use" mindset, where people are so worried about building metrics and pre-answering questions that they don't just sit down and build something useful, quickly. Also, your approach may work against interrelated dashboards that drill down from one to the other, a usage model I've used very effectively and that you also see in the monitoring console (MC). Dashboards aren't something to be greedy or stingy about. Let people build them. That said, my core rule of thumb is something you may not have noticed: good dashboards minimize scrolling. That means that with a well-written dashboard, users don't have to scroll (much). Use pop-up panels (with close links for the pop-up), use drilldowns to other dashboards, or just build highly targeted dashboards. It helps, a lot.
The question of which inputs to enable is always answered by "the inputs that provide the logs you care about". You don't have to worry about the HF vs UF question in this area.

That said, make sure you aren't routing through a HF just for the sake of it. HFs should typically only be used for a few reasons:

When you cannot route (actual routing, not just firewall) from the UF to the IDX (either on a time schedule or permanently)
When you need modular inputs

In almost every other scenario, it's best not to go through a HF. Your question makes me suspect you may be routing through a HF.

When a HF (or IDX) receives logs from another source, it will "cook" or parse the logs, then send them according to outputs.conf. There are no log types being discussed here that put you in danger of runaway log volume growth or a logging loop.
I have created some indexes on Splunk Cloud; can we not delete these indexes? The option to delete is disabled in Splunk Cloud. Can anyone help with this?
Thank you @ITWhisperer 
Hi, Ironstream isn't a Splunk product, and IBM mainframes and minis are protected by a rather large paywall. Have you talked to your Ironstream account/support team? Ironstream documentation is open, but I don't see a reference to ESDS or other data sets. If Ironstream can read a data set and deserialize its records to UTF-8, it should be technically possible for Splunk to receive the data.
Would the Akamai add-on work with the Akamai Prolexic Analytics API ?
In a drilldown, I have 2 possible queries and they look like:

qry1=index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ( mid="$token_mid$" OR "MID $token_mid$")
qry2=index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND (icid="$token_icid$" OR mid="$token_mid$" OR "MID $token_mid$")

if $token_icid$==0, execute qry1; else execute qry2.

How can it be achieved? ChatGPT gave this answer, but it is not working:

index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ( (($token_icid$=="0") AND (mid="$token_mid$")) OR (($token_icid$!="0") AND (icid="$token_icid$")) OR mid="$token_mid$" OR "MID $token_mid$" )
That space is not the issue, @gcusello. The mistake happened because I took pictures of the source code, extracted the text from those pictures via an open-source website, and later pasted it here; the mistake only occurred during this process. Thanks
@richgalloway This is to identify possible lateral movement attacks that involve the spawning of a PowerShell process as a child or grandchild process of commonly abused processes. These processes include services.exe, wmiprvse.exe, svchost.exe, wsmprovhost.exe, and mmc.exe. Such behavior is indicative of legitimate Windows features such as the Service Control Manager, Windows Management Instrumentation, Task Scheduler, Windows Remote Management, and the DCOM protocol being abused to start a process on a remote endpoint. This behavior is often seen during lateral movement techniques, where adversaries or red teams abuse these services for lateral movement and remote code execution. Thanks
Hi, Can you provide a sample of the raw data? It's probably JSON, assuming what you've posted is from Splunk's "List" view. The spath command in your search also expects _raw (by default) to be JSON. If that's the case, the fields aren't empty. They have a literal hyphen as their value. For example:

{"body_bytes_sent": "0", "bytes_sent": "0", "host": "nice_host", "http_content_type": "-", "http_referer": "-", "http_user_agent": "-", "kong_request_id": "8853b73ffef1c5522b4a383c286c825e", "log_type": "kong", "query_string": "-", "remote_addr": "10.138.100.153", "request_id": "93258e0bc529fa9844e0fd2d69168d0f", "request_length": "1350", "request_method": "GET", "request_time": "0.162", "scheme": "https", "server_addr": "10.138.100.151", "server_protocol": "HTTP/1.1", "status": "499", "time_local": "25/Feb/2024:05:11:24 +0000", "upstream_addr": "10.138.103.157:8080", "upstream_host": "nice_host", "upstream_response_time": "0.000", "uri": "/v1/d5a413b6-7d00-4874-b706-17b15b7a140b"}

{"body_bytes_sent": "0", "bytes_sent": "0", "host": "nice_host", "http_content_type": "-", "http_referer": "-", "http_user_agent": "-", "kong_request_id": "89cea871feba9f2d5216856f7a884223", "log_type": "kong", "query_string": "productType=ALL", "remote_addr": "10.138.100.214", "request_id": "9dbf69defb49a3595cf1040e6ab5d4f2", "request_length": "1366", "request_method": "GET", "request_time": "0.167", "scheme": "https", "server_addr": "10.138.100.151", "server_protocol": "HTTP/1.1", "status": "499", "time_local": "25/Feb/2024:05:11:24 +0000", "upstream_addr": "10.138.98.140:8080", "upstream_host": "nice_host", "upstream_response_time": "0.000", "uri": "/v1/a8b7570f-d0af-4d0d-bd6d-f6cf31892267"}

You can search for the literal value directly:

query_string=- or query_string="-"

There is a caveat: the hyphen is a minor breaker and isn't indexed by Splunk as a term.
All events will be returned initially, the query_string field will be extracted, and its value will be scanned for a hyphen to filter results. If your JSON fields aren't auto-extracted, we can investigate your inputs.conf and props.conf settings.
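As a sketch, filtering on the literal hyphen and summarizing might look like the following. The index and sourcetype names here are assumptions; substitute your own.

```
index=main sourcetype=kong:json log_type=kong query_string="-"
| stats count by uri
```

Because the hyphen isn't an indexed term, the index constraints do the coarse filtering and the query_string="-" predicate is applied after field extraction.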
Hi, We can use the cmdb_ci and cmdb_rel_ci tables to analyze CI relationships. For this example, we'll use Splunk Add-on for ServiceNow 7.7.0 with the cmdb_ci and cmdb_rel_ci inputs configured and enabled. The number and types of relationships will vary depending on our model. We'll use the relationships described in the ServiceNow Common Service Data Model at https://docs.servicenow.com/bundle/washingtondc-servicenow-platform/page/product/csdm-implementation/concept/ci-relationships.html: Application Service -[ Depends on::Used by ]-> Application Application -[ Runs on::Runs ]-> Infrastructure CIs If we're not using Service Mapping, the CI classes and relationships may differ. We'll create several sample CIs with appropriate relationships: Splunk::Application Service -[ Depends on::Used by ]-> Splunk Enterprise::Application Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-cm-1::Linux Server Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-1::Linux Server Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-2::Linux Server Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-3::Linux Server Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-sh-1::Linux Server We'll start our search with the required relationships: index=snow sourcetype=snow:cmdb_rel_ci dv_type IN ("Depends on::Used by" "Runs on::Runs") earliest=0 latest=now If we have more than one ServiceNow instance, we can add endpoint=https://xxx to our searches, where xxx is the fully-qualified domain name of our instance. 
sourcetype=snow:cmdb_rel_ci includes the following fields of interest: sys_id, parent, dv_type, and child, illustrated by:

index=snow sourcetype=snow:cmdb_rel_ci dv_type="Depends on::Used by" earliest=0 latest=now
| stats latest(parent) as parent latest(child) as child by sys_id

Using sourcetype=snow:cmdb_ci_list and sourcetype=snow:cmdb_rel_ci, we can graph relationships using join:

index=snow sourcetype=snow:cmdb_ci_list dv_sys_class_name="Mapped Application Service" name=Splunk earliest=0 latest=now
| stats latest(name) as name by sys_id
| rename name as service_name, sys_id as service_sys_id
| join type=left max=0 service_sys_id [
  search index=snow sourcetype=snow:cmdb_rel_ci dv_type="Depends on::Used by" earliest=0 latest=now
  | stats latest(parent) as service_sys_id latest(child) as application_sys_id by sys_id
  | fields service_sys_id application_sys_id ]
| join type=left max=0 application_sys_id [
  search index=snow sourcetype=snow:cmdb_ci_list earliest=0 latest=now
  | stats latest(name) as name by sys_id
  | rename name as application_name, sys_id as application_sys_id ]
| join type=left max=0 application_sys_id [
  search index=snow sourcetype=snow:cmdb_rel_ci dv_type="Runs on::Runs" earliest=0 latest=now
  | stats latest(parent) as application_sys_id latest(child) as server_sys_id by sys_id
  | fields application_sys_id server_sys_id ]
| join type=left max=0 server_sys_id [
  search index=snow sourcetype=snow:cmdb_ci_list earliest=0 latest=now
  | stats latest(name) as name by sys_id
  | rename name as server_name, sys_id as server_sys_id ]
| stats values(server_name) as server_name by service_name

We can add search predicates to the sourcetype=snow:cmdb_ci_list subsearches, e.g. dv_operational_status=Operational, to limit the CIs returned. Note that Splunk doesn't "know" if a CI is deleted. If we delete a CI or have multiple CIs with the same name but different sys_id values, invalid or duplicate CIs by name will appear in the search results.
Given the searches above, we should highlight: 1) earliest=0 latest=now will return all currently available events. This is not only inefficient for a large number of static CIs or a moderate number of frequently updated CIs, it's also subject to the limits of our indexer cluster and index configurations: SmartStore cache may be exceeded, older CIs may be in frozen buckets, etc. 2) The join command can be inefficient and is subject to subsearch limits in limits.conf. What are the alternatives? We can refactor the searches using transaction, stats, etc. and creative logic, but we'll still be subject to index lifecycle limits and the frequency of CI updates. We can create KV store collections to store CIs, but do we want to clone our CMDB in both indexes and KV store collections? KV store collections also have limits. If we're in a Splunk Cloud environment, for example, increasing instance disk space to store large collections is a challenge. In my own work, I've replicated CMDB data to Neo4j and used Cypher to query and analyze CI relationships. You may be interested in the Common Metadata Data Model (CMDM) https://splunkbase.splunk.com/app/5508 app by @lekanneer. The app implements much of what's required to use Neo4j with Splunk.
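To illustrate the Neo4j approach, a Cypher query over the example CIs might look like the following. This is a sketch under assumptions: the node labels, relationship types, and property names are hypothetical and depend entirely on how you model the CMDB data during replication.

```cypher
// Labels (ApplicationService, Application, Server) and relationship
// types (DEPENDS_ON, RUNS_ON) are hypothetical modeling choices
MATCH (svc:ApplicationService {name: "Splunk"})
      -[:DEPENDS_ON]->(app:Application)
      -[:RUNS_ON]->(srv:Server)
RETURN svc.name AS service_name,
       app.name AS application_name,
       collect(srv.name) AS server_names
```

Traversals like this replace the chained join subsearches above with a single pattern match, which is where a graph database tends to pay off for CI relationship analysis.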
What do you want from the alert?  What problem are you trying to solve?  Once we know the objective we can help you tune the alert. As it stands now, the alert is triggered for every PowerShell or command line process, anything launched by one of those processes, or any service.  That's a lot of processes, not all of which are interesting.