All Posts


In a drilldown, I have 2 possible queries. They look like:

qry1 = index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ( mid="$token_mid$" OR "MID $token_mid$" )

qry2 = index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ( icid="$token_icid$" OR mid="$token_mid$" OR "MID $token_mid$" )

If "$token_icid$" == 0, execute qry1; else execute qry2.

How can this be achieved? ChatGPT gave this answer, but it doesn't work:

index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND ( (($token_icid$=="0") AND (mid="$token_mid$")) OR (($token_icid$!="0") AND (icid="$token_icid$")) OR mid="$token_mid$" OR "MID $token_mid$" )
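One pattern that might work here is building the drilldown query at click time with an <eval> token action inside <drilldown>, then URL-encoding it into the link. A minimal Simple XML sketch, assuming $token_icid$, $token_mid$, and $token_source_domain$ are already set by the panel and that $token_icid$ is always numeric (the drill_query token name is illustrative):

<drilldown>
  <!-- assumption: $token_icid$ is numeric; 0 means "no icid available" -->
  <eval token="drill_query">if($token_icid$ == 0,
    "index=fed:xxx_yyyy sourcetype=\"aaaaa:bbbbb:cccc\" source_domain=\"$token_source_domain$\" (mid=\"$token_mid$\" OR \"MID $token_mid$\")",
    "index=fed:xxx_yyyy sourcetype=\"aaaaa:bbbbb:cccc\" source_domain=\"$token_source_domain$\" (icid=\"$token_icid$\" OR mid=\"$token_mid$\" OR \"MID $token_mid$\")")</eval>
  <!-- the |u filter URL-encodes the token value for the link -->
  <link target="_blank">search?q=$drill_query|u$</link>
</drilldown>

Because tokens are substituted before the eval expression is evaluated, this only works when $token_icid$ always has a value; an empty token would make the expression invalid.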
That space is not the issue @gcusello. The mistake happened because I took pictures of the source code, extracted the text from those pictures via an open-source website, and then pasted it here; that mistake was introduced somewhere in the process. Thanks
@richgalloway This is to identify possible lateral movement attacks that involve the spawning of a PowerShell process as a child or grandchild process of commonly abused processes. These processes include services.exe, wmiprvse.exe, svchost.exe, wsmprovhost.exe, and mmc.exe. Such behavior indicates that legitimate Windows features such as the Service Control Manager, Windows Management Instrumentation, Task Scheduler, Windows Remote Management, and the DCOM protocol are being abused to start a process on a remote endpoint. This behavior is often seen during lateral movement techniques where adversaries or red teams abuse these services for lateral movement and remote code execution. Thanks
Hi, Can you provide a sample of the raw data? It's probably JSON, assuming what you've posted is from Splunk's "List" view. The spath command in your search also expects _raw (by default) to be JSON. If that's the case, the fields aren't empty. They have a literal hyphen as their value. For example:

{"body_bytes_sent": "0", "bytes_sent": "0", "host": "nice_host", "http_content_type": "-", "http_referer": "-", "http_user_agent": "-", "kong_request_id": "8853b73ffef1c5522b4a383c286c825e", "log_type": "kong", "query_string": "-", "remote_addr": "10.138.100.153", "request_id": "93258e0bc529fa9844e0fd2d69168d0f", "request_length": "1350", "request_method": "GET", "request_time": "0.162", "scheme": "https", "server_addr": "10.138.100.151", "server_protocol": "HTTP/1.1", "status": "499", "time_local": "25/Feb/2024:05:11:24 +0000", "upstream_addr": "10.138.103.157:8080", "upstream_host": "nice_host", "upstream_response_time": "0.000", "uri": "/v1/d5a413b6-7d00-4874-b706-17b15b7a140b"}

{"body_bytes_sent": "0", "bytes_sent": "0", "host": "nice_host", "http_content_type": "-", "http_referer": "-", "http_user_agent": "-", "kong_request_id": "89cea871feba9f2d5216856f7a884223", "log_type": "kong", "query_string": "productType=ALL", "remote_addr": "10.138.100.214", "request_id": "9dbf69defb49a3595cf1040e6ab5d4f2", "request_length": "1366", "request_method": "GET", "request_time": "0.167", "scheme": "https", "server_addr": "10.138.100.151", "server_protocol": "HTTP/1.1", "status": "499", "time_local": "25/Feb/2024:05:11:24 +0000", "upstream_addr": "10.138.98.140:8080", "upstream_host": "nice_host", "upstream_response_time": "0.000", "uri": "/v1/a8b7570f-d0af-4d0d-bd6d-f6cf31892267"}

You can search for the literal value directly:

query_string=- or query_string="-"

There is a caveat: the hyphen is a minor breaker and isn't indexed by Splunk as a term. All events will be returned initially, the query_string field will be extracted, and its value will be scanned for a hyphen to filter results. If your JSON fields aren't auto-extracted, we can investigate your inputs.conf and props.conf settings.
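Putting it together, a minimal end-to-end sketch of the original search, assuming the JSON fields are auto-extracted (index and host taken from the question; adjust as needed):

index="my_indexx" host="nice_host" uri="/v1/*" query_string="-"
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime
| eval avg_request_time=round(avg_request_time,2), avg_upstreamTime=round(avg_upstreamTime,2)

Filtering on query_string="-" and uri="/v1/*" up front avoids the searchmatch/regex over _raw entirely.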
Hi, We can use the cmdb_ci and cmdb_rel_ci tables to analyze CI relationships. For this example, we'll use Splunk Add-on for ServiceNow 7.7.0 with the cmdb_ci and cmdb_rel_ci inputs configured and enabled. The number and types of relationships will vary depending on our model. We'll use the relationships described in the ServiceNow Common Service Data Model at https://docs.servicenow.com/bundle/washingtondc-servicenow-platform/page/product/csdm-implementation/concept/ci-relationships.html:

Application Service -[ Depends on::Used by ]-> Application
Application -[ Runs on::Runs ]-> Infrastructure CIs

If we're not using Service Mapping, the CI classes and relationships may differ. We'll create several sample CIs with appropriate relationships:

Splunk::Application Service -[ Depends on::Used by ]-> Splunk Enterprise::Application
Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-cm-1::Linux Server
Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-1::Linux Server
Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-2::Linux Server
Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-idx-3::Linux Server
Splunk Enterprise::Application -[ Runs on::Runs ]-> splunk-sh-1::Linux Server

We'll start our search with the required relationships:

index=snow sourcetype=snow:cmdb_rel_ci dv_type IN ("Depends on::Used by" "Runs on::Runs") earliest=0 latest=now

If we have more than one ServiceNow instance, we can add endpoint=https://xxx to our searches, where xxx is the fully-qualified domain name of our instance.

sourcetype=snow:cmdb_rel_ci includes the following fields of interest: sys_id, parent, dv_type, and child, illustrated by:

index=snow sourcetype=snow:cmdb_rel_ci dv_type="Depends on::Used by" earliest=0 latest=now
| stats latest(parent) as parent latest(child) as child by sys_id

Using sourcetype=snow:cmdb_ci_list and sourcetype=snow:cmdb_rel_ci, we can graph relationships using join:

index=snow sourcetype=snow:cmdb_ci_list dv_sys_class_name="Mapped Application Service" name=Splunk earliest=0 latest=now
| stats latest(name) as name by sys_id
| rename name as service_name, sys_id as service_sys_id
| join type=left max=0 service_sys_id [
    search index=snow sourcetype=snow:cmdb_rel_ci dv_type="Depends on::Used by" earliest=0 latest=now
    | stats latest(parent) as service_sys_id latest(child) as application_sys_id by sys_id
    | fields service_sys_id application_sys_id ]
| join type=left max=0 application_sys_id [
    search index=snow sourcetype=snow:cmdb_ci_list earliest=0 latest=now
    | stats latest(name) as name by sys_id
    | rename name as application_name, sys_id as application_sys_id ]
| join type=left max=0 application_sys_id [
    search index=snow sourcetype=snow:cmdb_rel_ci dv_type="Runs on::Runs" earliest=0 latest=now
    | stats latest(parent) as application_sys_id latest(child) as server_sys_id by sys_id
    | fields application_sys_id server_sys_id ]
| join type=left max=0 server_sys_id [
    search index=snow sourcetype=snow:cmdb_ci_list earliest=0 latest=now
    | stats latest(name) as name by sys_id
    | rename name as server_name, sys_id as server_sys_id ]
| stats values(server_name) as server_name by service_name

We can add search predicates to the sourcetype=snow:cmdb_ci_list subsearches, e.g. dv_operational_status=Operational, to limit the CIs returned. Note that Splunk doesn't "know" if a CI is deleted. If we delete a CI or have multiple CIs with the same name but different sys_id values, invalid or duplicate CIs by name will appear in the search results.
Given the searches above, we should highlight:

1) earliest=0 latest=now will return all currently available events. This is not only inefficient for a large number of static CIs or a moderate number of frequently updated CIs, it's also subject to the limits of our indexer cluster and index configurations: SmartStore cache may be exceeded, older CIs may be in frozen buckets, etc.

2) The join command can be inefficient and is subject to subsearch limits in limits.conf.

What are the alternatives? We can refactor the searches using transaction, stats, etc. and creative logic, but we'll still be subject to index lifecycle limits and the frequency of CI updates. We can create KV store collections to store CIs (sketched below), but do we want to clone our CMDB in both indexes and KV store collections? KV store collections also have limits. If we're in a Splunk Cloud environment, for example, increasing instance disk space to store large collections is a challenge.

In my own work, I've replicated CMDB data to Neo4j and used Cypher to query and analyze CI relationships. You may be interested in the Common Metadata Data Model (CMDM) app (https://splunkbase.splunk.com/app/5508) by @lekanneer. The app implements much of what's required to use Neo4j with Splunk.
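To make the KV store option concrete, here's a minimal sketch that snapshots CI names into a collection. cmdb_ci_collection is a hypothetical lookup that would need to be defined in collections.conf and transforms.conf first:

index=snow sourcetype=snow:cmdb_ci_list earliest=0 latest=now
| stats latest(name) as name latest(dv_sys_class_name) as class by sys_id
| outputlookup cmdb_ci_collection

Later searches can then resolve names with, e.g., | lookup cmdb_ci_collection sys_id as service_sys_id OUTPUT name as service_name, replacing the cmdb_ci_list join subsearches and their repeated index scans.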
What do you want from the alert? What problem are you trying to solve? Once we know the objective, we can help you tune the alert. As it stands now, the alert is triggered for every PowerShell or command-line process, anything launched by one of those processes, or any service. That's a lot of processes, not all of which are interesting.
You can add anything you like to any perfmon stanza, but that doesn't mean it will work. Only the documented counters (see Settings -> Data inputs -> Local performance monitoring) will work.
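For reference, a minimal inputs.conf sketch that collects the counter from the object it belongs to; % Committed Bytes In Use is a Memory-object counter, so it can't be collected under Process (the interval value is illustrative):

# inputs.conf
[perfmon://Memory]
object = Memory
counters = % Committed Bytes In Use
interval = 60
disabled = 0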
If the users' LDAP names do not match their Splunk account names, then all knowledge objects (KOs) will have to be reassigned to the LDAP account names.
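If reassignment becomes necessary, one option is the knowledge object ACL endpoint in the REST API. A sketch, assuming admin credentials and a saved search named My Report in the search app (all names here are illustrative):

# reassign one saved search to the LDAP account name;
# the acl endpoint requires both owner and sharing in the POST
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Report/acl \
  -d owner=new_ldap_name -d sharing=app

The same pattern applies to other knowledge object endpoints (dashboards, macros, lookups).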
Hi, Could anyone please help me fine-tune this search, as it is raising a lot of alerts?

| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes
    where (Processes.parent_process_name=wmiprvse.exe OR Processes.parent_process_name=services.exe OR Processes.parent_process_name=svchost.exe OR Processes.parent_process_name=wsmprovhost.exe OR Processes.parent_process_name=mmc.exe)
      (Processes.process_name=powershell.exe OR (Processes.process_name=cmd.exe AND Processes.process=*powershell.exe*) OR Processes.process_name=pwsh.exe OR (Processes.process_name=cmd.exe AND Processes.process=*pwsh.exe*))
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process Processes.process_id Processes.parent_process_id
| rename Processes.* as *
| eval firstTime = strftime(firstTime, "%F %T")
| eval lastTime = strftime(lastTime, "%F %T")

thanks
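One common tuning approach is to suppress known-good combinations with an allowlist lookup. A sketch of the idea, where ps_lateral_movement_allowlist is a hypothetical CSV lookup (columns dest, parent_process_name, process_name, is_expected) that you would maintain yourself:

| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes
    where Processes.parent_process_name IN (wmiprvse.exe, services.exe, svchost.exe, wsmprovhost.exe, mmc.exe)
      (Processes.process_name IN (powershell.exe, pwsh.exe) OR (Processes.process_name=cmd.exe AND Processes.process IN (*powershell.exe*, *pwsh.exe*)))
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process Processes.process_id Processes.parent_process_id
| rename Processes.* as *
| lookup ps_lateral_movement_allowlist dest parent_process_name process_name OUTPUT is_expected
| where isnull(is_expected)
| eval firstTime=strftime(firstTime, "%F %T"), lastTime=strftime(lastTime, "%F %T")

Raising a count threshold (e.g. | where count > 5) or grouping by dest only are other levers, depending on what the alert is meant to catch.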
Hi @Keerthi, you have to dedup on the field that you display; in the first search you dedup on two fields, so it's possible that you have duplicated values for the displayed field. In the second search, instead, I don't see any dedup command. Add a dedup row, dedupping on the field to display. Ciao. Giuseppe
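For example, a minimal sketch of a dropdown population search (field and index names are placeholders):

<input type="dropdown" token="my_token">
  <search>
    <query>index=my_index sourcetype=my_sourcetype | stats count by my_field | fields my_field</query>
  </search>
  <fieldForLabel>my_field</fieldForLabel>
  <fieldForValue>my_field</fieldForValue>
</input>

Using stats by (or dedup) on the single displayed field guarantees each value appears only once in the dropdown.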
Hi @Rao_KGY, there's a difference in the first row of the searches. In the first search you have:

com.thehartford.pl.model.exception.CscServiceException:

In the second one, you have:

com.thehartford.pl.model.exception.CscService Exception:

In other words, there's an additional space in the second search; maybe this is the reason. Ciao. Giuseppe
@gcusello even the timechart command is not working for me. FYI, I did try to create a new dashboard by adding these queries to a panel, but I was getting the same error. I have created multiple dashboards before but never faced such an issue. Following is the source code of the errored dashboard:

<dashboard version="1.1" theme="light">
  <label>CSC Impacted Services Health</label>
  <row>
    <panel>
      <title>Build Profile Failures-CIAM Service Impact</title>
      <chart>
        <search>
          <query>index=app_pl Appid=APP-3515 Environment="PROD" "com.thehartford.pl.model.exception.CscServiceException: null at com.thehartford.pl.rest.UserProfileController.builduserProfile" | timechart count as Failure span=1h</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <link target="_self">search?q=index%3Dapp_p1%20Appid%3DAPP-3515%20Environment%3D%22PROD%22%20%22com.thehartford.pl.mo .rest.UserProfileController.buildUserProfile%22%0A%7Crex%20%22CIAMSESSION%20%3A%20(%3F%3Cciamsession%3E%5B%5Cw%5Cs% (%3F%3Cuserid%3E%5B%5C%5C%40%5C.%5D%2B) %22%0A%7C%20bin%20span%3D1h%20_time%20%0A%60%60%60%7C%20table%20ciamsession me&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link>
        </drilldown>
      </chart>
    </panel>
    <panel>
      <title>Build Profile Failures- CIAM Service Impact</title>
      <table>
        <search>
          <query>index=app_pl Appid=APP-3515 Environment="PROD" "com.thehartford.pl.model.exception.CscService Exception: null at com.thehartford.pl.rest.UserProfileController.buildUserProfile" | bin span=1h _time | stats count as Failure by _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
@richgalloway Thanks for your suggestion. Will creating or mapping the existing users to LDAP impact existing reports, dashboards, macros, etc. created by different users?
Sure, this is what I see inside the search string; please refer to the screenshot. Please let me know if you need more information.
Hi @snowee, to my knowledge, Splunk doesn't restart itself; you're probably watching a fork of a running process. Anyway, open a case with Splunk Support. Ciao. Giuseppe
Hi @Keerthi, in the results of the search you are using to populate the dropdown, there are some duplicated values, so you have to dedup your results on the field that you're using for display. If you could share your dropdown search, I could be more detailed. Ciao. Giuseppe
Hello, I have installed Splunk on my server and I found many splunkd processes restarting and consuming a lot of memory. How do I fix that?
Hi, I don't understand the issue. The dropdown filter is showing duplicate values. Can anyone please help me resolve this?
Hello team, below are my Splunk logs:

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 8853b73ffef1c5522b4a383c286c825e log_type: kong query_string: - remote_addr: 10.138.100.153 request_id: 93258e0bc529fa9844e0fd2d69168d0f request_length: 1350 request_method: GET request_time: 0.162 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.103.157:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/d5a413b6-7d00-4874-b706-17b15b7a140b }

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 89cea871feba9f2d5216856f7a884223 log_type: kong query_string: productType=ALL remote_addr: 10.138.100.214 request_id: 9dbf69defb49a3595cf1040e6ab5d4f2 request_length: 1366 request_method: GET request_time: 0.167 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.98.140:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/a8b7570f-d0af-4d0d-bd6d-f6cf31892267 }

From the above, I want to extract request_time and upstream_response_time from the log events for URIs matching "/v1/*" whose query_string is empty (-). I tried the search below, but it returns results where query_string is both empty and has values (productType=ALL):

index="my_indexx"
| spath host
| search host="nice_host"
| eval Operations=case(searchmatch("GET query_string: - /v1/*"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

I also tried:

index="ek_cloud_k8sdta_digital_platforms_kong"
| spath host
| search host="shopping-carts-service-oxygen-dev.apps.stg01.digitalplatforms.aws.emirates.dev"
| eval Operations=case(match(_raw, "/v1/[^/ ?]"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

Can someone help with this?
Hi @richgalloway, can we add "% Committed Bytes In Use" to Perfmon : Process? I ask because I can see the % Committed Bytes In Use counter in Perfmon : Memory. I already tried to add that counter to the Process source but had no luck. Is there any way to add it? Thanks