All Topics

I am trying to filter my search results so that only a particular subset of the results is shown. For example, suppose the intermediate search result is:

MESSAGE: Records::0
MESSAGE: Records::1
MESSAGE: Records::0
MESSAGE: Records::4

The final search results should contain only the events where the record count is greater than 0. Is there a query that can help with this?
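A minimal SPL sketch, assuming the "Records::N" text lives in a field called MESSAGE: extract the number with rex and keep only events where it is greater than zero.

<your search>
| rex field=MESSAGE "Records::(?<record_count>\d+)"
| where tonumber(record_count) > 0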
Hi, I want to get rid of columns which have only a single unique value. There could be multiple columns showing this behaviour.

Test   Value1  Value2  Value3  Value4
Test1  2       b       a       7
Test2  1       c       a       7

I want to get rid of columns "Value3" and "Value4", since they each have only one unique value across all rows.

@gcusello @ITWhisperer @scelikok @PickleRick
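A sketch of one way to identify those columns, assuming the table above is the output of your base search: fieldsummary reports the distinct count per field, which picks out the columns to drop.

<your search>
| fieldsummary
| where distinct_count <= 1
| fields field

For the sample data this returns Value3 and Value4; a fully dynamic removal usually means feeding that list back into the main search (for example via a subsearch), since fields - expects the field names to be spelled out.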
#mission_control, #splunk_cloud

Hi, in my org Mission Control events are primarily investigated by the SOC as soon as they pop up; if further investigation is needed, the incident is escalated to the Enterprise Security team, which is responsible for performing the deeper/detailed investigation and updating Mission Control.

USE CASE: The Enterprise Security manager wants a dashboard that tells him, for investigations performed by his team (ES), the average time a team member takes to resolve an incident, averaged over a month.

For the ES team I have a lookup file, or I can simply type their names (only 7-8 people). I need a query which, when assigne is in (tom, tim, xyz), evaluates the difference between update_time and create_time, averaged out over the month.

Fields we have:

| mcincidents add_response_stats=true
| eval create_time=strptime(create_time, "%m/%d/%Y %I:%M:%S %p")
| eval update_time=strptime(update_time, "%m/%d/%Y %I:%M:%S %p")
| table assigne, create_time, update_time, description, disposition, id, incident_type, name, sensitivity, source_type, summary
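A sketch of the kind of query described, assuming assigne, create_time and update_time come back from mcincidents in the "%m/%d/%Y %I:%M:%S %p" format shown above (the assignee list is a placeholder):

| mcincidents add_response_stats=true
| search assigne IN ("tom", "tim", "xyz")
| eval create_epoch=strptime(create_time, "%m/%d/%Y %I:%M:%S %p")
| eval update_epoch=strptime(update_time, "%m/%d/%Y %I:%M:%S %p")
| eval resolution_hours=round((update_epoch - create_epoch) / 3600, 2)
| eval month=strftime(create_epoch, "%Y-%m")
| stats avg(resolution_hours) as avg_resolution_hours by assigne month

Swapping the hard-coded IN list for a subsearch over your team lookup (a hypothetical es_team.csv with an assigne column) would let the lookup drive the filter instead.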
My company flagged Redis as a security vulnerability because requirepass is not enabled. How do I enable it and give the password to the clients that connect to Redis?
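A minimal sketch, assuming a default Redis installation (the config path and password are placeholders): set requirepass in redis.conf, restart Redis, and have clients authenticate before issuing commands.

# /etc/redis/redis.conf (path varies by distro); restart redis-server afterwards
requirepass <your-strong-password>

# clients then authenticate, e.g. with redis-cli:
#   redis-cli -a <your-strong-password> ping
# or by sending AUTH <your-strong-password> after connecting

Redis 6+ also offers finer-grained ACL users, but requirepass is the minimal change a scanner of this kind is usually asking for.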
How do I display the top N and replace the rest with "Others"? I tried top limit=5 with useother, but the numbers didn't match and it showed extra fields like count, percent and _tc. This is just an example; I have a lot of fields and rows in the real data. Thank you for your help.

| addcoltotals labelfield=Name
| top limit=5 useother=t Name Score   ==> numbers didn't match

Before:
  Expense  Name           Score
  1        Rent           2000
  2        Car            1000
  3        Insurance      700
  4        Food           500
  5        Education      400
  6        Utility        200
  7        Entertainment  100
  8        Gym            70
  9        Charity        50
  10       Total          5020

After:
  Expense  Name       Score
  1        Rent       2000
  2        Car        1000
  3        Insurance  700
  4        Food       500
  5        Education  400
  6        Others     420
  7        Total      5020
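A sketch that matches the "After" table above, assuming each Name already carries a single Score value (top counts occurrences, so it will not sum a numeric Score column; ranking and relabelling with streamstats keeps the values):

<your search>
| sort - Score
| streamstats count as rank
| eval Name=if(rank <= 5, Name, "Others")
| stats sum(Score) as Score by Name
| sort - Score
| addcoltotals labelfield=Name label=Total

The second sort is only cosmetic, to keep the biggest items on top before the total row is appended.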
Hi, I have two sets of JSON data. I want to find the keys which are unique to one dataset and the keys which are missing from it, in comparison with the other dataset.

My first data set looks like this:

{
  "iphone":  { "price": "50", "review": "Good" },
  "desktop": { "price": "80", "review": "OK" },
  "laptop":  { "price": "90", "review": "OK" }
}

My second data set looks like this:

{
  "tv":      { "price": "50", "review": "Good" },
  "desktop": { "price": "60", "review": "OK" }
}

Therefore, for the first data set (w.r.t. the second data set), the unique values will be iphone and laptop, and the missing value will be tv.

How can I work out this difference and show it in a table with columns like "uniq_value" and "missing_value"? I could only get the query this far, which is only half of it and not what I want:

index=product_db
| eval p_name=json_array_to_mv(json_keys(_raw))
| eval p_name=mvfilter(NOT match(p_name, "uploadedBy") AND NOT match(p_name, "time"))
| mvexpand p_name
| table p_name

Thanks
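A sketch of one way to finish this, assuming the two datasets arrive as two separate events and something in each event (a hypothetical dataset field here, e.g. derived from source) tells them apart:

index=product_db
| eval dataset=<expression that yields "first" or "second">
| eval p_name=json_array_to_mv(json_keys(_raw))
| eval p_name=mvfilter(NOT match(p_name, "uploadedBy") AND NOT match(p_name, "time"))
| mvexpand p_name
| stats values(dataset) as datasets by p_name
| eval uniq_value=if(mvcount(datasets)==1 AND datasets=="first", p_name, null())
| eval missing_value=if(mvcount(datasets)==1 AND datasets=="second", p_name, null())
| stats values(uniq_value) as uniq_value values(missing_value) as missing_value

Keys present in both datasets (desktop in the example) drop out, which leaves uniq_value = iphone, laptop and missing_value = tv.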
Would you kindly assist us in hiding the credit card number and expiration date in the following field?

some ab n required YES
Accommodation [Bucharest] 5 Nights – Novotel Bucharest
HDFC Master card number 1234 4567 0009 2321
Expiry Date of HDFC card 01/26
Any other relevant info

Thanks and Regards,
Murali.
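A sketch of index-time masking with SEDCMD, assuming the events arrive under a sourcetype you can name in props.conf on the indexer or heavy forwarder (the stanza name and the exact wording around the expiry date are assumptions):

# props.conf
[your:sourcetype]
SEDCMD-mask_card   = s/\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}/XXXX XXXX XXXX XXXX/g
SEDCMD-mask_expiry = s/(Expiry Date of \w+ card\s+)\d{2}\/\d{2}/\1XX\/XX/g

If search-time masking is enough, rex mode=sed at search time follows the same s/// syntax.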
Hi, I want to create a search query that looks for users who have received phishing emails, clicked the link, or downloaded a file from the email. Thanks
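How this looks depends entirely on which email-security and web-proxy sources you have; as a very rough sketch, with every index, sourcetype and field name below being a placeholder for whatever your environment actually calls them:

(index=<email_index> sourcetype=<email_sourcetype> verdict="phish")
OR (index=<proxy_index> sourcetype=<proxy_sourcetype> url=<flagged_url>)
| eval activity=if(sourcetype=="<email_sourcetype>", "received phishing email", "clicked link / downloaded file")
| stats values(activity) as activities latest(_time) as last_seen by user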
I am a noob with Splunk. I am trying to join two indexes in one search.

First index:

index="idx-enterprise-tools" sourcetype="spectrum:alarm:json"
| eval Host=substr(host,1,9)

Second index:

index=idx-sec-cloud sourcetype=rubrik:json
    NOT (summary="*on demand backup*" OR custom_details.clusterName="ART1RBRK100P" OR custom_details.clusterName="ONT1RBRK100P" OR custom_details.clusterName="GRO1RBRK100P")
    (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty" OR custom_details.errorId="Vmware.VmwareCBTCorruption"))
    OR (custom_details.eventName="Mssql.LogBackupFailed")
    OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
    OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
    OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
    OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
    OR (custom_details.eventName="Storage.SystemStorageThreshold")
    OR (custom_details.eventName="ClusterOperation.DiskLost")
    OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
    OR (custom_details.eventName="Hardware.DimmError")
    OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
    OR (custom_details.location="*/MSSQLSERVER")
| rename custom_details.eventName as EventName custom_details.errorId as ErrorCode custom_details.clusterName as ClusterName custom_details.location as LocationName
| eventstats count(eval(custom_details.location="*/MSSQLSERVER")) as MsSqlServer by summary

I am trying the search below, but I do not see any events, whereas both indexes return events for the same time frame:

index="idx-enterprise-tools" sourcetype="spectrum:alarm:json"
| eval Host=substr(host,1,9)
| join host
    [ search index=idx-sec-cloud sourcetype=rubrik:json
        NOT (summary="*on demand backup*" OR custom_details.clusterName="ART1RBRK100P" OR custom_details.clusterName="ONT1RBRK100P" OR custom_details.clusterName="GRO1RBRK100P")
        (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty" OR custom_details.errorId="Vmware.VmwareCBTCorruption"))
        OR (custom_details.eventName="Mssql.LogBackupFailed")
        OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
        OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
        OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
        OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
        OR (custom_details.eventName="Storage.SystemStorageThreshold")
        OR (custom_details.eventName="ClusterOperation.DiskLost")
        OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
        OR (custom_details.eventName="Hardware.DimmError")
        OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
        OR (custom_details.location="*/MSSQLSERVER") ]
| rename custom_details.eventName as EventName custom_details.errorId as ErrorCode custom_details.clusterName as ClusterName custom_details.location as LocationName
| eventstats count(eval(custom_details.location="*/MSSQLSERVER")) as MsSqlServer by summary
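A sketch of the usual fix, assuming the 9-character prefix really is what the two indexes have in common: build the same trimmed key on both sides and join on that key rather than on the raw host (note that join's subsearch is capped at 50,000 rows, so a stats-based correlation may be needed for large volumes):

index="idx-enterprise-tools" sourcetype="spectrum:alarm:json"
| eval Host=substr(host,1,9)
| join type=inner Host
    [ search index=idx-sec-cloud sourcetype=rubrik:json <your rubrik filters>
      | eval Host=substr(host,1,9) ]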
Hi, I'm after some assistance. I am trying to capture the peak number of concurrent users in a single one-minute block using timecharts. I can do this in one-minute blocks no problem.

Where this gets complicated is that I have been given the requirement that I should be able to change the timechart span from 1m to 1h and, when the hour span is selected, identify the peak minute with the highest number of users within each hour and display the number of users for that peak minute (rather than the peak number of users for that hour). Can anyone assist me with this, or advise if it's even possible? Thanks
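A sketch of one way to do this, assuming a user field and a $t_span$ token for the chart span: always count distinct users per minute first, then let the timechart take the maximum of those per-minute counts at whatever span is selected.

<your search>
| bin _time span=1m
| stats dc(user) as users_per_minute by _time
| timechart span=$t_span$ max(users_per_minute) as peak_minute_users

With span=1m this collapses to the per-minute counts; with span=1h each bucket shows the busiest minute inside that hour.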
Hi Team, I'd like to know how to integrate Splunk with Jira, to send Splunk alerts or raise an incident/issue in Jira for each Splunk alert from Splunk Cloud/Splunk Enterprise. Is there any recommended app or way to do this integration? Best Regards
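Whichever Splunkbase add-on or alert action ends up driving it, the integration boils down to a call to Jira's issue-creation REST endpoint; a hypothetical example of the request such an alert action or webhook handler would make (domain, project key, issue type and credentials are placeholders):

curl -u you@example.com:<api_token> \
     -H "Content-Type: application/json" \
     -X POST https://your-domain.atlassian.net/rest/api/2/issue \
     -d '{"fields": {"project": {"key": "OPS"},
                     "summary": "Splunk alert: <alert name>",
                     "description": "<alert details>",
                     "issuetype": {"name": "Task"}}}'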
Hello Splunkers! My Splunk Enterprise license expired on January 29th, so I renewed it. However, I am missing some events from the license-expiration period. How can I recover the missed events so they show up in the graph below?
I need to be able to add a tip when hovering over a single value viz. It's just a basic SVV, with code like this:

{
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_some_number" },
  "title": "",
  "options": {
    "majorValue": "> sparklineValues | lastPoint()",
    "trendValue": "> sparklineValues | delta(-2)",
    "sparklineValues": "> primary | seriesByName('floor')",
    "unitPosition": "before",
    "unit": "$",
    "majorFontSize": 36,
    "backgroundColor": "transparent",
    "showSparklineTooltip": true
  },
  "description": "",
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I need the tooltip to explain to the user how the number is calculated, so just text. It is to run in Splunk Cloud. Anybody got any insight, please?

And yes, I need this to be done in Dashboard Studio, so no need to spend your effort advising me to use Simple XML dashboards!
Hi, I'm trying to create a dashboard that easily presents API endpoint performance metrics. I am generating a summary index using the following search:

index=my_index app_name="my_app" sourcetype="aws:ecs" "line.logger"=USAGE_LOG
| fields _time line.uri_path line.execution_time line.status line.clientId
``` use a regex to figure out the endpoint from the uri path ```
| lookup endpoint_regex_lookup matchstring as line.uri_path OUTPUT app endpoint match
| rename line.status as http_status, line.clientId as client_id
| fillnull value="" http_status client_id
| bin _time span=1m
| sistats count as volume p50(line.execution_time) as P50 p90(line.execution_time) as P90 p95(line.execution_time) as P95 p99(line.execution_time) as P99 by _time app endpoint http_status client_id

and I can use searches like these:

index=summary source=summary-my_app
| timechart $t_span$ p50(line.execution_time) as P50 p90(line.execution_time) as P90 p95(line.execution_time) as P95 p99(line.execution_time) as P99 by endpoint
| sort endpoint

index=summary source=summary-my_app
| timechart span=1m count by endpoint

so I can generate a dashboard using a trellis layout that maps the performance of our endpoints without having to hard-code a bunch of panels. I'm trying to add a chart that displays the http_status counts over time for each endpoint (similar to the latency chart). I've tried a number of different things, but can't get it to work. I know I can't use the following:

index=summary source=summary-my_app
| timechart count by endpoint http_status

so I thought the following might work:

index=summary source=summary-my_app
| stats count by endpoint http_status _time

but this shows me the http_status counts on a single line rather than as separate series. Does anyone know how I could get this to work?
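A sketch of the usual workaround, since timechart only accepts a single split-by field: combine endpoint and http_status into one series name with eval before the timechart (the delimiter is arbitrary).

index=summary source=summary-my_app
| eval series=endpoint . ":" . http_status
| timechart span=1m count by series

If there are many endpoint/status combinations, adding limit=0 to the timechart keeps them from being lumped into OTHER.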
A user wants to create a new field alias for a field that appears in two sourcetypes. How many field aliases need to be created, one or two? I think it should be one, but the answer says two. Can someone explain?
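Field aliases are defined per props.conf stanza, and each stanza is scoped to a single sourcetype (or source/host), so covering two sourcetypes means two alias definitions; a sketch with hypothetical names:

# props.conf
[sourcetype_one]
FIELDALIAS-src = src_ip AS src

[sourcetype_two]
FIELDALIAS-src = src_ip AS src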
We are using a Splunk Universal Forwarder (UF) to forward logs from a Windows server to a Splunk Heavy Forwarder (HF). However, when the Splunk HF receives logs of a specific type as multiline events, an issue arises: when these logs are forwarded from the Splunk HF to a syslog server (a Linux server running rsyslog), they are getting truncated. How can we address and resolve this issue?
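Two limits are commonly in play here; a sketch of both (the stanza name, host and sizes are assumptions): the syslog output's own event-size cap in outputs.conf on the HF, and rsyslog's message-size cap on the receiving side.

# outputs.conf on the heavy forwarder
[syslog:my_syslog_target]
server = syslog.example.com:514
type = tcp
maxEventSize = 65536

# /etc/rsyslog.conf on the receiving server (must appear before the input modules are loaded)
$MaxMessageSize 64k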
I want to write a query that lists the users who are not authorized to enter. There is a lookup table that contains the people who are authorized to enter.
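A minimal sketch, assuming the lookup is called authorized_users.csv (or the name of your lookup definition) and that both the events and the lookup have a user field:

index=<your_index> sourcetype=<your_sourcetype>
| lookup authorized_users.csv user OUTPUT user as authorized_user
| where isnull(authorized_user)
| stats count latest(_time) as last_seen by user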
Hi at all, I encountered a strange behaviour in one Splunk infrastructure. We have two Heavy Forwarders that concentrate on-premise logs and send them to Splunk Cloud. For some days, one of them has stopped forwarding logs, even after restarting Splunk. I found three new unknown folders on both HFs: quarantined files, cmake, swidtag. In addition, sometimes the other HF also stops forwarding logs and I have to restart it and the UFs, otherwise log collection stays stopped. I knew that an Indexer can be quarantined, but can a Heavy Forwarder be quarantined too? How do I unquarantine it? I opened a case with Splunk Support, but in the meantime, has anyone experienced similar behaviour? Thank you for your help. Ciao. Giuseppe
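While waiting on support, a sketch of the internal-log checks that usually narrow this down (the host filter is a placeholder): blocked output queues and tcpout errors on the HF show up in metrics.log and splunkd.log.

index=_internal host=<hf_hostname> source=*metrics.log* group=queue
| timechart span=5m max(current_size) by name

index=_internal host=<hf_hostname> source=*splunkd.log* log_level=ERROR TcpOutputProc
| stats count by component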
Hi, I have installed a Splunk Forwarder on a remote computer and I chose WMI as the data input on the main server. But when I try to find a log, I get a message that the remote computer is not reachable. This is despite having defined firewall rules for the Splunk dynamic ports. Would you please help me?
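Worth noting that WMI inputs are collected remotely by the main Splunk server and do not use the Universal Forwarder at all; since the forwarder is already installed, local event log inputs avoid WMI and its DCOM/firewall requirements. A sketch, assuming you want the Application and System logs:

# inputs.conf on the Universal Forwarder
[WinEventLog://Application]
disabled = 0

[WinEventLog://System]
disabled = 0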
Hi, I am trying to configure a UF installed on Windows machines to send logs to an HF, and then have the HF forward those logs to the indexer. I found some questions, but mostly they were very high level. If someone could explain how this works, that would be great.
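A minimal sketch of the three pieces involved (hosts and the 9997 port are placeholders, adjust to your environment): the UF sends to the HF via outputs.conf, the HF listens with a splunktcp input, and the HF's own outputs.conf points at the indexer.

# outputs.conf on the Universal Forwarder (Windows)
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = <hf_host>:9997

# inputs.conf on the Heavy Forwarder
[splunktcp://9997]
disabled = 0

# outputs.conf on the Heavy Forwarder
[tcpout]
defaultGroup = indexer_group

[tcpout:indexer_group]
server = <indexer_host>:9997

The UF also needs its Windows inputs (e.g. WinEventLog stanzas) enabled, each receiving port must be allowed through the firewall, and the respective Splunk instances need a restart after the edits.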