All Posts

What do you want from the alert? What problem are you trying to solve? Once we know the objective, we can help you tune the alert. As it stands now, the alert is triggered for every PowerShell or command-line process, anything launched by one of those processes, or any service. That's a lot of processes, not all of which are interesting.
You can add anything you like to any perfmon stanza, but that doesn't mean it will work.  Only the documented (in Settings->Data input->Local performance monitoring) counters will work.
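As a hedged illustration only (the stanza, interval, and index values here are hypothetical examples, not taken from this thread), a documented counter is normally enabled in its own perfmon stanza for the matching object in inputs.conf, rather than appended to a stanza for a different object:

```
# inputs.conf -- illustrative sketch; the counter must belong to the
# stanza's object ("% Committed Bytes In Use" is a Memory counter)
[perfmon://Memory]
object = Memory
counters = % Committed Bytes In Use
interval = 60
disabled = 0
```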
If the users' LDAP names do not match their Splunk account names, then all knowledge objects (KOs) will have to be reassigned to the LDAP account names.
Hi, could anyone please help me fine-tune this search? It is raising a lot of alerts.

| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes
    where (Processes.parent_process_name=wmiprvse.exe OR Processes.parent_process_name=services.exe OR Processes.parent_process_name=svchost.exe OR Processes.parent_process_name=wsmprovhost.exe OR Processes.parent_process_name=mmc.exe)
      (Processes.process_name=powershell.exe OR (Processes.process_name=cmd.exe AND Processes.process=*powershell.exe*) OR Processes.process_name=pwsh.exe OR (Processes.process_name=cmd.exe AND Processes.process=*pwsh.exe*))
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process Processes.process_id Processes.parent_process_id
| rename Processes.* as *
| eval firstTime = strftime(firstTime, "%F %T")
| eval lastTime = strftime(lastTime, "%F %T")

Thanks
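One possible direction, purely as a hedged sketch: suppress hits from known-good sources and only alert above a count threshold. The lookup name approved_admin_hosts.csv and the threshold value are hypothetical placeholders, not from this thread; keep the tstats/by clause from the original search unchanged:

```
<original tstats search>
| rename Processes.* as *
| search NOT [| inputlookup approved_admin_hosts.csv | fields dest ]
| where count > 3
| eval firstTime = strftime(firstTime, "%F %T"), lastTime = strftime(lastTime, "%F %T")
```

Whether to filter on host, user, or parent process (and what threshold makes sense) depends on what the alert is meant to catch, which is why the responders above ask for the objective first.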
Hi @Keerthi, you have to dedup on the field that you display. In the first search you dedup on two fields, so it is possible that you still have duplicated values for the displayed field. Instead, in the second search, I don't see any dedup command at all. Add a dedup command deduplicating on the field to display. Ciao. Giuseppe
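For example, a dropdown population search can be deduplicated on just the displayed field like this (the index, sourcetype, and field names are hypothetical placeholders):

```
index=my_index sourcetype=my_sourcetype
| dedup displayed_field
| table displayed_field
| sort displayed_field
```

An equivalent alternative is `| stats count by displayed_field | fields displayed_field`, which also guarantees one row per value.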
Hi @Rao_KGY, there's a difference in the first row of the searches. In the first search you have: com.thehartford.pl.model.exception.CscServiceException: but in the second one you have: com.thehartford.pl.model.exception.CscService Exception: In other words, there's an extra space in the second search; maybe this is the reason. Ciao. Giuseppe
@gcusello even the timechart command is not working for me. JFYI, I did try to create a new dashboard by adding these queries to a panel, but I got the same error. I have created multiple dashboards before but never faced such an issue. Following is the source code of the errored dashboard.

<dashboard version="1.1" theme="light">
  <label>CSC Impacted Services Health</label>
  <row>
    <panel>
      <title>Build Profile Failures-CIAM Service Impact</title>
      <chart>
        <search>
          <query>index=app_pl Appid-APP-3515 Environment="PROD" "com.thehartford.pl.model.exception.CscServiceException: null at com.thehartford.pl.rest.UserProfileController.builduserProfile" | timechart count As Failure span=1h</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <link target="_self">search?q=index%3Dapp_p1%20Appid%3DAPP-3515%20Environment%3D%22PROD%22%20%22com.thehartford.pl.mo .rest.UserProfileController.buildUserProfile%22%0A%7Crex%20%22CIAMSESSION%20%3A%20(%3F%3Cciamsession%3E%5B%5Cw%5Cs% (%3F%3Cuserid%3E%5B%5C%5C%40%5C.%5D%2B) %22%0A%7C%20bin%20span%3D1h%20_time%20%0A%60%60%60%7C%20table%20ciamsession me&amp; earliest-$field1.earliest$&amp; latest-$field1.latest$</link>
        </drilldown>
      </chart>
    </panel>
    <panel>
      <title>Build Profile Failures- CIAM Service Impact</title>
      <table>
        <search>
          <query>index=app_pl Appid=APP-3515 Environment="PROD" "com.thehartford.pl.model.exception.CscService Exception: null at com.thehartford.pl.rest.UserProfileController.buildUserProfile" | bin span=1h _time l stats count as Failure by _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </row>
        </dashboard>
        </search>
        <option name="count">50</option>
        <option name="dataoverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
@richgalloway Thanks for your suggestion. Will creating or mapping the existing users to LDAP have any impact on existing reports, dashboards, macros, etc. created by different users?
Sure, this is what I see inside the search string; please refer to the screenshot. Please let me know if you need more information.
Hi @snowee, to my knowledge, Splunk doesn't restart itself; probably you're seeing a fork of a running process. Anyway, open a case with Splunk Support. Ciao. Giuseppe
Hi @Keerthi, in the results of the search you are using to populate the dropdown, there are some duplicated values, so you have to dedup your results on the field that you're using for display. If you could share your dropdown search, I could be more detailed. Ciao. Giuseppe
Hello, I have installed Splunk on my server and I found many splunkd restart processes that consume a lot of memory. How do I fix that?
Hi, I don't understand the issue. The dropdown filter is showing duplicate values. Can anyone please help me resolve this?
Hello team, below are my Splunk logs:

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 8853b73ffef1c5522b4a383c286c825e log_type: kong query_string: - remote_addr: 10.138.100.153 request_id: 93258e0bc529fa9844e0fd2d69168d0f request_length: 1350 request_method: GET request_time: 0.162 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.103.157:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/d5a413b6-7d00-4874-b706-17b15b7a140b }

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 89cea871feba9f2d5216856f7a884223 log_type: kong query_string: productType=ALL remote_addr: 10.138.100.214 request_id: 9dbf69defb49a3595cf1040e6ab5d4f2 request_length: 1366 request_method: GET request_time: 0.167 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.98.140:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/a8b7570f-d0af-4d0d-bd6d-f6cf31892267 }

From the above, I want to extract request_time and upstream_response_time from the log events for the uri "/v1/*" whose query_string is empty (-). I tried the search query below, but it returns results containing both events where query_string is empty and events with values (productType=ALL):

index="my_indexx"
| spath host
| search host="nice_host"
| eval Operations=case( searchmatch("GET query_string: - /v1/*"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

I also tried:

index="ek_cloud_k8sdta_digital_platforms_kong"
| spath host
| search host="shopping-carts-service-oxygen-dev.apps.stg01.digitalplatforms.aws.emirates.dev"
| eval Operations=case( match(_raw, "/v1/[^/ ?]"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

Can someone help with this?
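One hedged sketch of a fix, assuming the key/value pairs in these events are (or can be) extracted as fields: filter on the query_string field directly rather than pattern-matching the raw text. Field availability depends on the sourcetype configuration; if the fields are not auto-extracted, a plain `| spath` (no argument) can extract them from JSON events first. Index and host values below are the ones from the first query in the post:

```
index="my_indexx" host="nice_host"
| spath
| search uri="/v1/*" query_string="-"
| eval Operations="getCart"
| stats avg(request_time) as avg_request_time
        avg(upstream_response_time) as avg_upstreamTime
        perc90(request_time) as p90_request_time
        perc90(upstream_response_time) as p90_upstreamResponseTime
        by Operations
| eval avg_request_time=round(avg_request_time,2), avg_upstreamTime=round(avg_upstreamTime,2)
```

Filtering on `query_string="-"` as a field comparison avoids matching events where query_string has a value such as productType=ALL, which is what the searchmatch() approach was letting through.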
Hi @richgalloway, can we add "% Committed Bytes In Use" to Perfmon : Process? I ask because I can see the % Committed Bytes In Use counter in Perfmon : Memory. I already tried to add that counter to the Process source, but no luck... Is there any way to add it? Thanks
Hi @Rao_KGY, could you share your dashboard code? Maybe there's something else. Ciao. Giuseppe
Hi @sigma, as @richgalloway said, on Linux Splunk is usually installed in /opt, and it's a best practice to have a file system separated from root; this location is configured in an environment variable called $SPLUNK_HOME. For data, it's possible to set up a variable (called $SPLUNK_DB) that indicates the location of the file system containing the data folders, instead of the $SPLUNK_HOME/var folder; it's a best practice to put this on a different and larger file system. So you can go into $SPLUNK_HOME/etc/splunk-launch.conf and configure the SPLUNK_HOME variable for your system. Obviously this action applies only to indexers or stand-alone Splunk systems, not to the other roles. Ciao. Giuseppe
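For instance, splunk-launch.conf might look like this (the paths here are illustrative only; adapt them to your own file systems):

```
# $SPLUNK_HOME/etc/splunk-launch.conf -- example values, not defaults
SPLUNK_HOME=/opt/splunk
SPLUNK_DB=/data/splunk/db
```

A restart of splunkd is needed for changes to this file to take effect.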
Hi @richgalloway , Thank you for the support. Thanks
Hey @gcusello @ITWhisperer, thanks for the information. JFYI, I'm using the same timeframe (i.e. 24 hrs) for both panels, and the span is also the same, 1 hr. @gcusello, as per your suggestion I tried timechart, but again the same issue: "No Results Found". Yet the same query works fine when run as a separate search. And you're right, I shouldn't use the table command, but since nothing was working, I tried it just as a workaround.
You can change _time (or any field) in a query, but it doesn't change the indexed data (nothing does).
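As a small illustration (index name and offset are arbitrary examples), an eval like the following shifts _time only in the search results; the events on disk keep their original indexed timestamps:

```
index=my_index
| eval _time = _time - 3600
```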