Hi, I need to collect the logs from Windows Defender and I was looking for an official app but couldn't find one. I've seen people recommend "TA for Microsoft Windows Defender", but it hasn't been updated since 2017. Is there a more recent option? Thanks.
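For reference, a minimal sketch of collecting Defender events with a Universal Forwarder's Windows Event Log input. The channel name is the standard Defender operational channel on recent Windows versions; the index name and settings are assumptions to adapt:

```
# inputs.conf on the Windows Universal Forwarder (sketch)
# Collects the Windows Defender operational event log channel.
[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
disabled = 0
index = wineventlog    # assumption: replace with your index
renderXml = true       # optional: XML events parse more consistently
```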
Hi, please help me extract a new field, URI, using rex. Sample event: /area/label/health/readiness||||||||||METRICS|--
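Assuming the URI is everything before the first pipe character, a rex sketch like this might work (field layout inferred from the sample above):

```
... | rex field=_raw "^(?<URI>[^|]+)"
| table URI
```

For the sample event this would yield URI=/area/label/health/readiness.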
Sometimes, after an app is changed and deployed to our Universal Forwarders on Windows computers, the conf files end up malformed. Specifically, the inputs.conf file has all the spaces and formatting removed, and the forwarder no longer uses the file. The only fix I have found is to delete the app from the forwarder and wait for the deployment server to redeploy it.
We are not able to send data to the application controller dashboard. When we enable the AppDynamics agent for PHP, the Apache process terminates. Please find the error logs below.

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
Apologies, I am quite new to Splunk so I'm not sure if this is possible. I have the following simple query:

| inputlookup appJobLogs
| where match(MessageText, "(?i)general error")
| rex mode=sed field=MessageText "s/, /\n/g"
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc

This works and gets the data I need for the error I am after, but I want all associated values for the error by RunID. The headers are: Host, InvocationID, Name, LogID, LogTS, LogName, MessageID, MessageText, RunID, RunTS, RunName. I would like to do something like:

| inputlookup appJobLogs
| where RunID in [ | search appJobLogs | where match(MessageText, "(?i)general error") | fields RunID ]

I have tried various forms; the closest I got was a join, which gave me the not-found fields (that should be fixable) but is limited to 10,000 results, so that seems like the wrong solution.
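One possible direction, as an untested sketch using the lookup and field names from the post: run the error filter in a subsearch and let the format command turn the RunID values into a search expression:

```
| inputlookup appJobLogs
| search
    [ | inputlookup appJobLogs
      | where match(MessageText, "(?i)general error")
      | fields RunID
      | dedup RunID
      | format ]
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc
```

The subsearch expands to (RunID="a") OR (RunID="b") OR ..., so the outer search keeps every row sharing a RunID with an error. Note that subsearches have their own result limits (typically 10,000 rows), so this assumes the set of matching RunIDs is well below that.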
I'm ingesting logs from DNS (NextDNS via API) and struggling to exclude the header. I have seen @woodcock resolve some other examples and I can't quite see where I'm going wrong. The common mistake is not doing this on the UF.

Sample data (comes in via a curl command and is written out to a file):

timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
2023-09-01T09:09:21.561936+00:00,beam.scs.splunk.com,AAAA,false,DNS-over-HTTPS,213.31.58.70,,,,splunk.com,8D512,"NUC10i5",,,,nextdns-cli
2023-09-01T09:09:09.154592+00:00,time.cloudflare.com,A,true,DNS-over-HTTPS,213.31.58.70,,,,cloudflare.com,14D3C,"NUC10i5",,,,nextdns-cli

UF (on syslog server) v8.1.0:

props.conf
[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp

inputs.conf
[monitor:///opt/remote-logs/nextdns/nextdns.log]
index = nextdns
sourcetype = nextdns:dns
initCrcLength = 375

Indexer (SVA S1) v9.1.0. I disabled the options here; I will apply the Great 8 once this is fixed. All the work needs to happen on the UF.

[nextdns:dns]
#INDEXED_EXTRACTIONS = CSV
#HEADER_FIELD_LINE_NUMBER = 1
#HEADER_FIELD_DELIMITER = ,
#FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
#TIMESTAMP_FIELDS = timestamp

Challenge: the header line is still being ingested. I have deleted the indexed data, regenerated an updated log, and reingested, and the issue persists. Obviously I have restarted Splunk on each instance after the respective changes.
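One common alternative, offered only as a hedged sketch: drop the header line at parse time with a nullQueue transform. Bear in mind that with INDEXED_EXTRACTIONS enabled on the UF, parsing happens on the forwarder rather than the indexer, so where such a transform would take effect depends on the pipeline; the transform stanza name below is an arbitrary choice:

```
# props.conf (on the instance that actually parses this sourcetype)
[nextdns:dns]
TRANSFORMS-drop_header = nextdns_drop_header

# transforms.conf
[nextdns_drop_header]
REGEX = ^timestamp,domain,query_type
DEST_KEY = queue
FORMAT = nullQueue
```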
Hello all, I have the following issue: I receive logs from an older machine for which I cannot adjust the logging settings. When extracting data in Splunk, I encounter the following field and some values:

id = EF_jblo_fdsfew42_sla
id = EF_space_332312_sla
id = EF_97324_pewpew_sla

With a field extraction I then derive my location from the id. For example:

id = EF_jblo_fdsfew42_sla  => location = jblo
id = EF_space_332312_sla   => location = space
id = EF_97324_pewpew_sla   => location = 97324 <- not an actual location here.

Now I aim to replace the location using an automatic lookup based on the ID "EF_97324_pewpew_sla". Unfortunately, I either retrieve only the location from the table, omitting the rest, or I only receive the values from the field extraction. I've reviewed the search-time operation sequence in the documentation, confirming that field extraction precedes lookups. However, I'm puzzled as to why the lookup consistently erases all the values rather than just overwriting a single one. Is there an automated solution running in the background, similar to an automatic lookup, that could resolve this?

The lookup:

ID                   Solution
EF_97324_pewpew_sla  TSINOC

My original concept was as follows: data is ingested into Splunk; a field extraction pulls the location from the ID; for the IDs that I know contain no location information, I replace the extracted value with the lookup data. I wanted the whole thing to run in the "background" so that users do not have to include it in their search string. I also tried using calculated fields to build one field from two, but since the calculation takes place before the lookup, that was unfortunately not possible.

Hope someone can help me. Kind regards, Flenwy
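For a manual search (not an automatic lookup), a sketch of the overwrite behavior described above, assuming a lookup file named thought_lookup with columns ID and Solution:

```
... | lookup thought_lookup ID OUTPUT Solution AS lookup_location
| eval location = coalesce(lookup_location, location)
```

Here the lookup value wins whenever the ID is in the table, and the extracted location is kept otherwise. The final eval is the step that cannot run automatically, since calculated fields are evaluated before automatic lookups in the search-time operation sequence.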
Hi, in some of the dashboards of my Splunk Monitoring Console I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details". The developer console shows a 404 Not Found error on many scripts; the same error is issued for other JS files like PopTart.js and Base.js. Searching for these files in the Splunk folder, I noticed the scripts are all stored in a folder called quarantined_files, an odd folder placed directly in the /opt/splunk/ path. Any ideas on how to debug this error?
Hello team, I have logs like this:

File Records count is 2
File Records count is 5
File Records count is 45
File Records count is 23

I have extracted the values 2, 5, 45, 23 into a separate field called Count. When I use "base search | table Count" I get the expected values in a table, but I want 2, 5, 45, 23 plotted on a line graph. I tried stats commands, but they only show the number of events that have Count, not the values themselves. Could you please advise on how I can plot the values of Count on a graph?
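A sketch of one way to plot the values in order of occurrence; the index name and extraction are assumptions based on the post:

```
index=your_index "File Records count is"
| rex "File Records count is (?<Count>\d+)"
| streamstats count AS row
| table row Count
```

Rendering this table as a line chart plots Count (y-axis) against the event sequence number row (x-axis). If the events carry meaningful timestamps, | table _time Count or a timechart may be preferable.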
We have the Splunk Add-on for AWS, under which we configured a CloudTrail input of type SQS-Based S3, but we are not receiving logs continuously.
Hi all, we want to run a modular input cleanup. What happens to the checkpoints? Will ingestion start from the beginning again? Thanks, Nick
How do I create an empty .csv lookup in Splunk Web?
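One sketch that works from the search bar rather than an uploaded file; the column names are placeholders:

```
| makeresults
| eval field1="", field2=""
| table field1 field2
| outputlookup empty.csv
```

This creates the lookup with the desired columns and a single blank row, which can be removed later (for example with the Lookup Editor app).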
Hi, my very first question. I have created a prebuilt panel and I didn't want to hardcode its name in the ref parameter, but Splunk doesn't like the following. Any ideas or advice?

<init>
  <set token="PageName">help_for_toy</set>
</init>

<row>
  <panel id="help" ref="$PageName$"></panel>
</row>
What is the equivalent of <html/> in Dashboard Studio? I want to include some static links.
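Dashboard Studio has a Markdown visualization that can hold static text and links, which may cover this use case. A minimal fragment of a dashboard's JSON source, with an arbitrary visualization id:

```
"viz_static_links": {
  "type": "splunk.markdown",
  "options": {
    "markdown": "### Links\n- [Splunk Docs](https://docs.splunk.com)\n- [Splunk Community](https://community.splunk.com)"
  }
}
```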
Hi all, can we list alerts based on the host IPs used in the alert queries?
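A sketch using the REST endpoint for saved searches, filtering their search strings on an IP; 10.0.0.5 is a placeholder, not a value from the post:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*10.0.0.5*"
| table title search cron_schedule
```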
Hello all, I am using Splunk to store logs in one of my projects. While using the developer org for my POC, everything worked as expected. After the POC, when I moved to the client's Splunk environment, I started facing an issue: the logs are not captured consistently. For example, when I call my service 5 times, sometimes all the calls and their logs are captured, but sometimes some logs are missing. Can someone help with this? Is it an environment issue or a Splunk issue? Thanks, Dinesh
Hello, I need to find an add-on that would allow me to email a dashboard on a schedule. The out-of-the-box PDF creation doesn't meet our requirements, sad to say. I have tried several add-ons that might work for us, but each one either can no longer be used because Splunk Enterprise dropped Advanced XML, or produces an error. So far the best add-on I have found is "Smart PDF Exporter for Splunk", but when I try to schedule it, I get a JavaScript error due to a 404. Does anyone know how to resolve this, or have other ideas for creating a scheduled email with the contents of the dashboard in the body? The format doesn't matter, as long as we can see the dashboard in the body of the email. Thanks for any help on this one. Tom
I have a question about filtering data. We have a customer requesting a set of fields to be sent in from O365. The issue is, we can't modify what we pull in because we are using an API, not the Universal Forwarder. Currently I am trying to test the search query to confirm that I am only pulling in the correct events with those fields. The O365 data contains about 400+ fields; we want about 40 of them for a specific use case. My question is: what is the correct Splunk syntax to search only for those fields?

Original query that brings in about 400+ fields:

index=o365

New query for about 35 fields:

index=o365 "Operation"="*" OR "LabelAction"="*" OR "LabelAppliedDateTime"="*" OR "LabelIid"="*" OR "abelName"="*" OR "DlpAuditEventMetadata.DlpPolicyMatchId"="*" OR "DlpAuditEventMetadata.EvaluationTime"="*" OR "DlpOriginalFilePath"="*" OR "IrmContentId"="*" OR "PolicyMatchInfo.PolicyId"="*" OR "PolicyMatchInfo.PolicyName"="*" OR "PolicyMatchInfo.RuleId"="*" OR "PolicyMatchInfo.RuleName"="*" OR "ProtectionEventData.IsProtected"="*" OR "ProtectionEventData.IsProtectedBefore"="*" OR "ProtectionEventData.ProtectionEventType"="*" OR "ProtectionEventData.ProtectionOwner"="*" OR "ProtectionEventData.ProtectionType"="*" OR "ProtectionEventData.TemplateId"="*" OR "ProtectionEventType"="*" OR "RMSEncrypted"="*" OR "SensitiveInfoTypeData{}.Confidence"="*" OR "SensitiveInfoTypeData{}.Count"="*" OR "SensitiveInfoTypeData{}.SensitiveInfoTypeId"="*" OR "SensitiveInfoTypeData{}.SensitiveInfoTypeName"="*" OR "SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Confidence"="*" OR "SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Count"="*" OR "SensitivityLabelEventData.ActionSource"="*" OR "SensitivityLabelEventData.ActionSourceDetail"="*" OR "SensitivityLabelEventData.ContentType"="*" OR "SensitivityLabelEventData.JustificationText"="*" OR "SensitivityLabelEventData.LabelEventType"="*" OR
"SensitivityLabelEventData.OldSensitivityLabelId"="*" OR "SensitivityLabelEventData.SensitivityLabelId"="*" OR "SensitivityLabelEventData.SensitivityLabelPolicyId"="*" OR "LabelName"="*" | fields Operation,LabelAction,LabelAppliedDateTime,LabelIid,abelName,DlpAuditEventMetadata.DlpPolicyMatchId,DlpAuditEventMetadata.EvaluationTime,DlpOriginalFilePath,IrmContentId,PolicyMatchInfo.PolicyId,PolicyMatchInfo.PolicyName,PolicyMatchInfo.RuleId,PolicyMatchInfo.RuleName,ProtectionEventData.IsProtected,ProtectionEventData.IsProtectedBefore,ProtectionEventData.ProtectionEventType,ProtectionEventData.ProtectionOwner,ProtectionEventData.ProtectionType,ProtectionEventData.TemplateId,ProtectionEventType,RMSEncrypted,SensitiveInfoTypeData{}.Confidence,SensitiveInfoTypeData{}.Count,SensitiveInfoTypeData{}.SensitiveInfoTypeId,SensitiveInfoTypeData{}.SensitiveInfoTypeName,SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Confidence,SensitiveInfoTypeData{}.SensitiveInformationDetailedClassificationAttributes{}.Count,SensitivityLabelEventData.ActionSource,SensitivityLabelEventData.ActionSourceDetail,SensitivityLabelEventData.ContentType,SensitivityLabelEventData.JustificationText,SensitivityLabelEventData.LabelEventType,SensitivityLabelEventData.OldSensitivityLabelId,SensitivityLabelEventData.SensitivityLabelId,SensitivityLabelEventData.SensitivityLabelPolicyId,LabelName     Basically, From my understanding and my research, if you just append a specific string in quotes, or outside of quotes, splunk searches all events for that string and pulls it in. Such as:   index=Test field1 field2 field3   That would bring in only events with field1 or field2 or field3 within it. Adding quotes to it, such as    index=Test "field1"="*" "field2"="*" "field3"="*"   Should filter the same way. I have tested it both way, with double quotes surrounding the field, as well as no quotes. 
I'm also using | fields, which should only bring those fields in, but I don't know whether it is merely displaying those fields while still bringing in ALL of the events.

My question is: is this correct? With the base searches I've been testing, searching all o365 events for one day (a full 24 hours) brings in 23,410,064 events. Filtering with the query pasted above, for the same day and same 24 hours, brings in 23,409,887 events. I've tested this a couple of ways, and each time, over the same time period, the filtering query brings in about 1k fewer events. But I can still only view the first 1k events (20 pages' worth), though that may be a separate question.

My long-winded question boils down to: am I searching this data correctly? I know it's a heavy index with millions of events, but filtering to only 40 or so fields, some of which appear only 0.6% of the time, still brings in millions of events. Is there a way to fully validate it?
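One hedged way to validate a filter like the one above is to count how many events actually carry any of the target fields, for instance with isnotnull over a small subset of them (field names taken from the query in the post):

```
index=o365
| eval has_target_fields=if(isnotnull(Operation) OR isnotnull(LabelAction)
      OR isnotnull('SensitivityLabelEventData.LabelEventType'), "yes", "no")
| stats count BY has_target_fields
```

Comparing the "yes" count against the filtered search's event count shows whether the field filter is actually restricting events or only restricting which fields are displayed.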
How do I use a search to generate values to use inside an IN clause? For example:

index=syslog src_ip IN ( | tstats count from datamodel=Random by ips | stats values(ips) as IP | eval IP = mvjoin(IP, ",")

I tried the method above but it's not working. Thank you!
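A common pattern, sketched here with the names from the post, is to let a subsearch return the values and have format build the search expression; the subsearch field must be renamed to match the outer field:

```
index=syslog
    [ | tstats count from datamodel=Random by ips
      | rename ips AS src_ip
      | fields src_ip
      | format ]
```

The subsearch expands to (src_ip="a") OR (src_ip="b") OR ..., which is equivalent to src_ip IN (a, b, ...).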
Hello all, I am hoping for some guidance here. I am using Maps+, which seems to be a decent application. There are two things I want to do, and so far no joy:

1. I want to be able to change the color of the cluster circle.
2. I want to be able to "un-zoom" back to the initial state after clicking on a cluster circle.

My map shows routers and switches, by lat/long, that are UP or DOWN. I have no issues with markerType, color, or icon style. I would like to change the cluster circle color for devices that are "down". Can I do this, or do I have to use a legacy map type? Thanks so much, eholz