All Posts


If you mean rendering it that way on the Incident Review page, that is because that part of the page isn't expecting HTML, only raw data. To achieve this, you would need to use custom .js, along with a clone of the Incident Review page, to remap those fields for all notables (this is only possible on an on-prem instance; I believe you cannot do it on cloud).
I'm looking to craft a query (a correlation search) that would trigger an alert whenever an internal system tries to access a malicious website. I would greatly appreciate any suggestions you may have. Thank you in advance for your help. Source=bluecoat
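One way to approach this, sketched under the assumption that your Bluecoat proxy events carry src and dest fields and that you maintain a lookup of known-bad domains (the lookup name malicious_domains.csv and its domain column are hypothetical placeholders):

```
index=proxy source=bluecoat
| lookup malicious_domains.csv domain AS dest OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats count min(_time) as firstTime max(_time) as lastTime by src, dest
```

Saved as a correlation search, the final stats produces one result per internal source/destination pair that matched the list. If you run Enterprise Security, its threat intelligence framework can also perform this kind of matching for you.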
Are you asking if Splunk Observability Cloud supports Windows IIS telemetry logging, or if it supports logging for your specific PHP webshop? If the latter, then it would be helpful to know the name of the PHP webshop.
If I understand correctly, you have two different log types ABC and EFG in the same index, and you want to count how many success, fail, and error events occur, but only for correlation IDs that occur in both ABC and EFG? Assuming the field names are correct, your current query should work to count success, fail, and error events from both, though it will count events that only occur in one of the two types. It is not clear how you would like the details (json_ext of message) to be displayed with the count of success, fail, and error events. You could do stats ... by json_ext to see the counts by json_ext, but this would only be practical if the json_ext messages are not very different.
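A sketch of restricting the counts to correlation IDs present in both log types (the field names corr_id, log_type, and status are assumptions; substitute your actual fields):

```
index=myindex log_type IN (ABC, EFG)
| eventstats dc(log_type) as types_seen by corr_id
| where types_seen=2
| stats count(eval(status="success")) as success
        count(eval(status="fail")) as fail
        count(eval(status="error")) as error
```

The eventstats/where pair keeps only events whose correlation ID appears under both log types before the final counts are taken.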
1. The best practice is to define as much as you can from the so-called magic eight. It boosts ingestion performance and helps you avoid errors with line breaking or timestamp recognition. 2. Sourcetype is just a metadata value associated with an event. From a technical point of view it doesn't have to be "defined" anywhere prior to setting a given value as the sourcetype. You could even create an index-time transform setting the sourcetype to a completely random value and Splunk would still work and process the events (although the effects might be far from desirable). It's the other way around: the value of the sourcetype metadata can, if configured properly, affect how Splunk processes events. BTW, sourcetype is just one of the metadata fields that comes into play when Splunk decides what to do with an event; the others are source and host.
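For illustration, a minimal props.conf stanza defining the "magic eight" settings (the sourcetype name my:sourcetype and the timestamp format are placeholders for your actual data):

```
[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

Defining these explicitly spares Splunk from guessing line breaks and timestamps at ingestion time, which is where the performance and correctness gains come from.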
Since we are in the early stages of using Splunk Cloud, we don't define props.conf as part of the onboarding process; we introduce props for a given sourcetype only when parsing goes wrong. For one such sourcetype, line breaking is off, but when I look for that sourcetype on the indexers (via support) and on the on-prem HF where the data is ingested, I cannot find any props for it. So I wonder: is it possible to have no sourcetype definition anywhere for a particular source?
But the aliases must be defined within an app. If that app is not exporting its objects, that might cause a problem. Anyway, global export is one thing (exporting globally lets you use the knowledge objects in other apps' scopes); the permissions assigned to a knowledge object are something else (you could export globally but grant permissions only to selected roles).
Ahh, indeed I missed the "when indexing" part, but I'd assume it was due to a misunderstanding by @uagraw01 of how field extractions work - they indeed mostly work during the search phase, not while indexing the events. But in case it was really meant as "index-time aliases": there is no such thing. Aliasing is always done at search time. But yes, you can specify multiple field aliases in one alias group (you can set it up in the GUI and check what conf file the server writes :-)).
The FIELDALIAS attribute extracts fields at search time rather than at index time as requested. IME, it's unusual to have a single FIELDALIAS attribute define more than one alias. Be sure the props.conf file has line continuation characters (\) between the aliases, as shown in props.conf.spec. If that doesn't work, use a separate FIELDALIAS setting for each alias.
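A sketch of both forms (the field names fieldA/fieldB and their aliases are hypothetical):

```
# One FIELDALIAS group, with line continuations between aliases:
FIELDALIAS-myaliases = fieldA AS aliasA \
  fieldB AS aliasB

# Or, if that misbehaves, one setting per alias:
FIELDALIAS-aliasA = fieldA AS aliasA
FIELDALIAS-aliasB = fieldB AS aliasB
```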
The best solution is to rewrite playbooks to break them up into smaller playbooks. If that is not workable, then the next best solution is to have a 5.x SOAR environment to maintain the playbooks. Other than that, you could use the dev tools in your browser to take performance measurements, then selectively disable things to see if they increase performance, but this is hacky and may have side effects.
Are you getting the logs in via a Kaspersky app? If so, is it possible to set the app to debug mode, or perhaps do so on the Kaspersky side, so as to get more detailed log messages describing what component is not working?
I have a feeling that Splunk is automatically capping the number of rows when you use | timechart span=1s (this could produce 86400 rows per day), which would explain why your search works fine with 1-2 days but not with more than three. Maybe you could try binning _time to a 1s value and then running stats on it:

index=test elb_status_code=200
| bin _time span=1s
| stats count as total by _time
| stats count as num_seconds by total
| sort 0 total

I am also curious how you got it to show values for a total of 0. The count() function does not do that by default.
@PickleRick Permissions are already set to global for the field alias.
Assuming your naming is OK, check the permissions.
Well, since you need to connect to an existing database, it must have been set up and be maintained by someone. That's the easiest way to find out - go and ask. You can find the most popular engines here: https://en.wikipedia.org/wiki/Relational_database#List_of_database_engines (among other sources).
@gcusello Thanks for your help in understanding the issue.

Case 1: Actually, there is no timestamp present in the provided csv. In the snapshot you're seeing, the data comes from a sample I ingested from the dev machine via UF; even here I am not able to see a "timestamp" field in the events.

Case 2: When I upload the csv in the data inputs, after selecting the sourcetype as "cmkcsv", it does show the timestamp field. But whatever settings I added under Advanced, they are not removing the warning flag "failed to parse timestamp defaulting to file modtime":

[cmkcsv]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = \n\W
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRUNCATE = 200
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true
TIME_PREFIX = ^\w+\s*\w+,\s*\w+,\s*
MAX_TIMESTAMP_LOOKAHEAD = 20
Hi @PickleRick. Thank you for your reply. As you probably noticed, I'm not a database person. Can you please explain to me how I can find out what kind of database I have, and what kinds of databases there are?
Not sure if this will help but you could try  | sort 0 total
Hello Splunkers!! Below is a sample event, and I want to extract some fields into Splunk while indexing. I have used the props.conf below to extract fields, but nothing is coming into Splunk's interesting fields. I have also attached a screenshot of the Splunk UI results. Please guide me on what I need to change in the settings.

[demo]
KEEP_EMPTY_VALS = false
KV_MODE = xml
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3f%Z
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TRUNCATE = 100000000
category = Custom
disabled = false
pulldown_type = true
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.AreaID" AS area "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ElementID" AS element "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.EquipmentID" AS equipment "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ZoneID" AS zone "eqtext:EquipmentEvent.eqtext:ID.eqtext:Description" AS description "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address "eqtext:EquipmentEvent.eqtext:Detail.State" AS state "eqtext:EquipmentEvent.eqtext:Detail.eqtext:EventTime" AS event_time "eqtext:EquipmentEvent.eqtext:Detail.eqtext:MsgNr" AS msg_nr "eqtext:EquipmentEvent.eqtext:Detail.eqtext:OperatorID" AS operator_id "eqtext:EquipmentEvent.eqtext:Detail.ErrorType" AS error_type "eqtext:EquipmentEvent.eqtext:Detail.Severity" AS severity

=================================

<eqtext:EquipmentEvent xmlns:eqtext="http://vanderlande.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://vanderlande.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" 
xmlns:eqtexo="http://vanderlande.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>8503</AreaID><ZoneID>3</ZoneID><EquipmentID>3</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> LMS not healthy</eqtext:Description><eqtext:MIS_Address>0.3</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-04-02T21:09:38.337Z</eqtext:EventTime><eqtext:MsgNr>4657614997395580315</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>    
The path looks good. Assuming your index=sysmon exists, it should bring in logs. Give it a shot and see if the logs come in.