All Posts



Hi @yuanliu    Thank you for the reply.  The table view is great. What I was trying to achieve is that to trigger an alert, for example, from the below table, if the latest event_id is "1545467"  c... See more...
Hi @yuanliu, thank you for the reply. The table view is great. What I was trying to achieve is to trigger an alert: for example, from the table below, if the latest event_id ("1545467") is the same as the last/previous event_id (also "1545467") for the same task_id and event_name over the last 2 hours, then an alert should be triggered. Since there is no change in the event_id, it should trigger an alert.

event_name  task_id  _time                                             event_id          server_state
            0        2023-08-01 15:41:40.395, 2023-08-01 15:10:40.395  1545467, 1545467
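One way to sketch that condition (index name and field names are assumptions taken from the table above, not a tested answer) is to count the distinct event_id values per task_id and event_name over the last two hours, and trigger when only one value is seen:

```
index=your_index earliest=-2h
| stats dc(event_id) AS distinct_ids latest(event_id) AS latest_event_id BY task_id event_name
| where distinct_ids = 1
```

Saved as an alert with a two-hour window, this would fire whenever the event_id has not changed for a given task_id and event_name in that window.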
Hi all, can we list alerts based on the host IPs used in the alert queries?
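A possible starting point (an untested sketch; it assumes the alerts are saved searches and that the IP appears literally in the search string) is the REST endpoint for saved searches:

```
| rest /servicesNS/-/-/saved/searches
| search alert_type!=always
| table title search
| search search="*10.1.2.3*"
```

`alert_type!=always` is a common way to separate alerts from plain reports; replace 10.1.2.3 with the host IP of interest.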
Hi @Devi13, I suppose that at least you have the host where logs coming from and the sourcetype, in addition, can you say that the first event is "count=0" and the last event is "XXX Process"? if ... See more...
Hi @Devi13, I suppose that at least you have the host the logs come from and the sourcetype; in addition, can you say that the first event is "count=0" and the last event is "XXX Process"? If this is true, this is one of the few situations to use the transaction command:

index=your_index sourcetype=your_sourcetype ("count=0" OR "process started" OR "Process")
| transaction host startswith="count=0" endswith="Process"
| table Process count

Ciao. Giuseppe
Hi @aditsss , dedup your results: index="abc" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" |... See more...
Hi @aditsss , dedup your results: index="abc" sourcetype =600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","") | eval EBNCMessage="ebnc event balanced successfully" | dedup EBNCMessage | table EBNCMessage True Ciao. Giuseppe
Hi @innoce, you don't need to quote field names if they don't contain any spaces or special characters. Anyway, good for you, and see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @GaetanVP, good for you, and see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @mikefg, good for you, and see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
This is exactly what I needed. Thank you so much!
Hello All, I am using Splunk to store logs in one of my projects. While using the developer org for my POC, everything worked as expected. After the POC, when I tried to use the client's Splunk environment, I started facing an issue: the logs are not captured consistently. For example, when calling my service 5 times, sometimes all the calls and logs are captured, but sometimes some logs are missing. Can someone help with this? Is it an environment issue or a Splunk issue? Thanks, Dinesh
Don't bother. IN optimizes to a series of ORs, so just start with that:

index=syslog [ | tstats count from datamodel=Random by ips | rename ips as src_ip | fields src_ip | format ]

The subsearch will run first and use the format command to produce a string like "(src_ip=1.2.3.4 OR src_ip=2.3.4.5)", which will become part of the main search.
Thanks! This works great unless the SAML role mapping has been deleted.
Hello, I need to find an add-on that would allow me to email a dashboard on a schedule. The out-of-the-box PDF creation doesn't meet our requirements, sad to say. I have been trying several different add-ons that might work for us, but each one either can no longer be used (because Splunk Enterprise no longer uses Advanced XML) or produces an error. So far the best add-on I have found is "Smart PDF Exporter for Splunk", but when I try to schedule it, I get a JavaScript error due to a 404 error. Does anyone know how to resolve this, or have any other ideas on how to create a scheduled email with the contents of the dashboard in the body? The format doesn't matter, as long as we can see the dashboard in the body of the email. Thanks for any help on this one. Tom
I have a question about filtering data. We have a customer who is requesting a set of fields to be sent in from O365. The issue is, we can't modify what we pull in because we are using an API, not the universal forwarder. Currently I am trying to test out the search query to confirm that I am only pulling in the correct events with those fields. The O365 data pulls in about 400+ fields; we want about 40 of those for a specific use case. My question is: what is the correct syntax for Splunk to only search for those fields?

Original query that brings in about 400+ fields:

index=o365

New query for about 35 fields:

index=o365 "Operation"="*" OR "LabelAction"="*" OR "LabelAppliedDateTime"="*" OR "LabelIid"="*" OR "abelName"="*" OR "DlpAuditEventMetadata.DlpPolicyMatchId"="*" OR "DlpAuditEventMetadata.EvaluationTime"="*" OR "DlpOriginalFilePath"="*" OR "IrmContentId"="*" OR "PolicyMatchInfo.PolicyId"="*" OR "PolicyMatchInfo.PolicyName"="*" OR "PolicyMatchInfo.RuleId"="*" OR "PolicyMatchInfo.RuleName"="*" OR "ProtectionEventData.IsProtected"="*" OR "ProtectionEventData.IsProtectedBefore"="*" OR "ProtectionEventData.ProtectionEventType"="*" OR "ProtectionEventData.ProtectionOwner"="*" OR "ProtectionEventData.ProtectionType"="*" OR "ProtectionEventData.TemplateId"="*" OR "ProtectionEventType"="*" OR "RMSEncrypted"="*" OR "SensitiveInfoTypeData{}.Confidence"="*" OR "SensitiveInfoTypeData{}.Count"="*" OR "SensitiveInfoTypeData{}.SensitiveInfoTypeId"="*" OR "SensitiveInfoTypeData{}.SensitiveInfoTypeName"="*" OR "SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Confidence"="*" OR "SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Count"="*" OR "SensitivityLabelEventData.ActionSource"="*" OR "SensitivityLabelEventData.ActionSourceDetail"="*" OR "SensitivityLabelEventData.ContentType"="*" OR "SensitivityLabelEventData.JustificationText"="*" OR "SensitivityLabelEventData.LabelEventType"="*" OR "SensitivityLabelEventData.OldSensitivityLabelId"="*" OR "SensitivityLabelEventData.SensitivityLabelId"="*" OR "SensitivityLabelEventData.SensitivityLabelPolicyId"="*" OR "LabelName"="*"
| fields Operation,LabelAction,LabelAppliedDateTime,LabelIid,abelName,DlpAuditEventMetadata.DlpPolicyMatchId,DlpAuditEventMetadata.EvaluationTime,DlpOriginalFilePath,IrmContentId,PolicyMatchInfo.PolicyId,PolicyMatchInfo.PolicyName,PolicyMatchInfo.RuleId,PolicyMatchInfo.RuleName,ProtectionEventData.IsProtected,ProtectionEventData.IsProtectedBefore,ProtectionEventData.ProtectionEventType,ProtectionEventData.ProtectionOwner,ProtectionEventData.ProtectionType,ProtectionEventData.TemplateId,ProtectionEventType,RMSEncrypted,SensitiveInfoTypeData{}.Confidence,SensitiveInfoTypeData{}.Count,SensitiveInfoTypeData{}.SensitiveInfoTypeId,SensitiveInfoTypeData{}.SensitiveInfoTypeName,SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Confidence,SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Count,SensitivityLabelEventData.ActionSource,SensitivityLabelEventData.ActionSourceDetail,SensitivityLabelEventData.ContentType,SensitivityLabelEventData.JustificationText,SensitivityLabelEventData.LabelEventType,SensitivityLabelEventData.OldSensitivityLabelId,SensitivityLabelEventData.SensitivityLabelId,SensitivityLabelEventData.SensitivityLabelPolicyId,LabelName

Basically, from my understanding and my research, if you just append a specific string, in quotes or outside of quotes, Splunk searches all events for that string and pulls them in. Such as:

index=Test field1 field2 field3

That would bring in only events with field1 or field2 or field3 within them. Adding quotes, such as

index=Test "field1"="*" "field2"="*" "field3"="*"

should filter the same way. I have tested it both ways, with double quotes surrounding the field, as well as no quotes.
I'm also using | fields, which should only bring those fields in, but I don't know if it's only showing those fields while still bringing in ALL of the events. My question is: is this correct? With the base searches I've been testing, searching all of the events in o365 for one full 24-hour day brings in 23,410,064 events. Filtering with the query I pasted above, for the same day and the same 24 hours, brings in 23,409,887 events. I've tested this a couple of ways, and each time, searching over the same time period, the filtering query brings in about 1k fewer events. But I can still only view the first 1k events, 20 pages' worth. That may be another question. My long-winded question boils down to: am I searching this data correctly? I know it's a heavy index with millions of events, but filtering down to only 40 or so fields, some of which appear only 0.6% of the time, still brings in millions of events. Is there a way to fully validate it?
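For reference, `| fields` only trims which columns are returned; it never drops events. To keep only events that actually contain at least one of the fields, filter on the fields themselves in the base search. A shortened sketch (the three field names stand in for the full list):

```
index=o365 Operation=* OR LabelAction=* OR LabelName=*
| fields Operation, LabelAction, LabelName
```

`field=*` matches only events where the field is extracted with a non-null value, so events lacking all of the listed fields are excluded before `fields` trims the display.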
This SPL will give you the failed saved searches:

index=_audit sourcetype=audittrail TERM(action=search) (TERM(info=bad_request)) (TERM(search=*) OR TERM(savedsearch=*)) NOT (MongoModificationsTracker OR (INFO (metrics OR PeriodicHealthReporter OR LicenseUsage) OR StreamedSearch) OR TERM(info=granted) OR (TERM(info=completed) TERM(has_error_warn=false) TERM(fully_completed_search=true)) OR GET ) provenance=scheduler
| rex mode=sed field=search "s/^'//"
| rex mode=sed field=search "s/'$//"
| rex mode=sed field=search_id "s/^'//"
| rex mode=sed field=search_id "s/'$//"
| table _time app info has_error_warn mode provenance savedsearch_name search search_id src user total_run_time
How do I use a search to generate values to use inside of an IN search? For example:

index=syslog src_ip IN ( | tstats count from datamodel=Random by ips | stats values(ips) as IP | eval IP = mvjoin(IP, ",")

I tried the method above but it's not working. Thank you!
Hello All, I am hoping for some guidance here. I am using Maps+. It seems to be a decent application. There are two things I want to do, and so far no joy:
1. I want to be able to change the color of the cluster circle.
2. I want to be able to "un-zoom" back to the initial state after clicking on a cluster circle.
My map shows routers and switches, by lat/long, that are UP or DOWN. I have no issues with markerType, color, or icon style. I would like to change the cluster circle color for devices that are "down". Can I do this, or do I have to use a legacy map type? Thanks so much, eholz
That's a lot of indexes - perhaps too many. Having thousands of indexes means having tens (or hundreds) of thousands of buckets, which makes for a lot of files to open (subject to OS limits), decompress, and read. It increases the chances of having lots of little indexes (and buckets) that are more metadata than data, wasting resources. Splunk recommends putting data that is commonly used together in searches into the same index for more efficient searching.

There are a few reasons for creating a new index:
1) Data has different access requirements/restrictions
2) Data has different retention requirements
3) Data is of such volume that it warrants a separate index.
How does the data get from the source file to Splunk?  If there are multiple readers then there could be duplicate data.
@GaetanVP, could you please advise?
Are Tag1 and Tag2 in the same event?  If not, what field links the two events?  Where are you using the isnull() function?