All Posts

But the aliases must be defined within an app. If that app is not exporting its objects, that can cause a problem. Also, global export is one thing (exporting globally lets you use the knowledge objects in other apps' scopes); the permissions assigned to a knowledge object are something else (you could export globally but grant read permission only to selected roles).
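For reference, both the app-level export and the per-object permissions live in the app's metadata files; a minimal local.meta sketch (the stanza path, alias name, and roles are illustrative, adjust them to your app):

# metadata/local.meta in the app that defines the alias
# export every knowledge object in this app to all other apps:
[]
export = system

# or export a single field alias and restrict its permissions (illustrative stanza path):
[props/your_sourcetype/FIELDALIAS-your_alias]
export = system
access = read : [ admin, power ], write : [ admin ]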
Ahh, indeed I missed the "when indexing" part, but I'd assume it was due to a misunderstanding by @uagraw01 of how field extractions work - they mostly happen during the search phase, not while the events are being indexed. But in case it was really meant as "index-time aliases" - there is no such thing. Aliasing is always done at search time. But yes, you can specify multiple field aliases in one alias group (you can try it in the GUI and check what conf file the server writes :-)).
The FIELDALIAS attribute creates fields at search time, not at index time as requested. IME, it's unusual to have a single FIELDALIAS attribute define more than one alias. Be sure the props.conf file has line continuation characters (\) between the aliases, as shown in props.conf.spec. If that doesn't work, then use a separate FIELDALIAS setting for each alias.
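A minimal props.conf sketch of both styles, reusing names from the question (each continuation backslash must be the last character on its line):

[demo]
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address \
"eqtext:EquipmentEvent.eqtext:Detail.State" AS state

# or one FIELDALIAS setting per alias:
FIELDALIAS-mis_address = "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address
FIELDALIAS-state = "eqtext:EquipmentEvent.eqtext:Detail.State" AS state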
The best solution is to rewrite the playbooks, breaking them up into smaller playbooks. If that is not workable, then the next best solution is to keep a 5.x SOAR environment to maintain the playbooks. Other than that, you could use the dev tools in your browser to take performance measurements, then selectively disable things to see if that improves performance, but this is hacky and may have side effects.
Are you getting the logs in via a Kaspersky app? If so, is it possible to set the app to debug mode, or perhaps enable it on the Kaspersky side, so as to get more detailed log messages describing what component is not working?
I have a feeling that Splunk is automatically capping the number of rows when you use | timechart span=1s (that span could produce 86400 rows per day), which would explain why your search works fine for 1-2 days but not for more than three. Maybe you could try binning _time to 1s values and then running stats on it:

index=test elb_status_code=200
| bin _time span=1s
| stats count as total by _time
| stats count as num_seconds by total
| sort 0 total

I am also curious how you got it to show values for a total of 0. The count() function does not do that by default.
@PickleRick Permissions are already set to global for the field alias.
Assuming your naming is OK, check the permissions.
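One way to check both the naming and the effective permissions is Splunk's REST endpoint for field aliases; a sketch (the title wildcard is illustrative, adjust it to your alias name):

| rest /servicesNS/-/-/data/props/fieldaliases
| search title="*fields_scada_xml*"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read eai:acl.perms.write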
Well, since you need to connect to an existing database, it must have been set up and be maintained by someone. That's the easiest way to find out - go and ask. You can find the most popular engines here: https://en.wikipedia.org/wiki/Relational_database#List_of_database_engines (among other sources).
@gcusello Thanks for your help in understanding the issue.

Case 1: Actually, there is no timestamp column in the provided csv. The snapshot shows data from a sample I ingested from the dev machine via UF, and even there I am not able to see a "timestamp" field in the events.

Case 2: When I upload the csv via data inputs and select the sourcetype "cmkcsv", the timestamp field does show up. But whatever settings I add under Advanced, they do not remove the warning "failed to parse timestamp, defaulting to file modtime":

[cmkcsv]
DATETIME_CONFIG=CURRENT
INDEXED_EXTRACTIONS=csv
KV_MODE=none
LINE_BREAKER=\n\W
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TRUNCATE=200
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true
TIME_PREFIX=^\w+\s*\w+,\s*\w+,\s*
MAX_TIMESTAMP_LOOKAHEAD=20
Hi @PickleRick Thank you for your reply. As you probably noticed, I'm not a database person. Can you please explain how I can find out what kind of database I have, and what kinds of databases there are?
Not sure if this will help, but you could try | sort 0 total (the 0 removes the sort command's default 10000-result limit).
Hello Splunkers!! Below is a sample event; I want to extract some fields into Splunk while indexing. I have used the props.conf below to extract the fields, but nothing is showing up under interesting fields in Splunk. I have also attached a screenshot of the Splunk UI results. Please guide me on what I need to change in the settings.

[demo]
KEEP_EMPTY_VALS = false
KV_MODE = xml
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3f%Z
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TRUNCATE = 100000000
category = Custom
disabled = false
pulldown_type = true
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.AreaID" AS area "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ElementID" AS element "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.EquipmentID" AS equipment "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ZoneID" AS zone "eqtext:EquipmentEvent.eqtext:ID.eqtext:Description" AS description "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address "eqtext:EquipmentEvent.eqtext:Detail.State" AS state "eqtext:EquipmentEvent.eqtext:Detail.eqtext:EventTime" AS event_time "eqtext:EquipmentEvent.eqtext:Detail.eqtext:MsgNr" AS msg_nr "eqtext:EquipmentEvent.eqtext:Detail.eqtext:OperatorID" AS operator_id "eqtext:EquipmentEvent.eqtext:Detail.ErrorType" AS error_type "eqtext:EquipmentEvent.eqtext:Detail.Severity" AS severity

=================================

<eqtext:EquipmentEvent xmlns:eqtext="http://vanderlande.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://vanderlande.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://vanderlande.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>8503</AreaID><ZoneID>3</ZoneID><EquipmentID>3</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> LMS not healthy</eqtext:Description><eqtext:MIS_Address>0.3</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-04-02T21:09:38.337Z</eqtext:EventTime><eqtext:MsgNr>4657614997395580315</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>
The path looks good. Assuming your index=sysmon exists, it should bring in logs. Give it a shot and see if the logs come in.
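Once the forwarder picks up the config, a quick sanity check from the search head might be (a sketch, assuming the index=sysmon from the post exists):

index=sysmon earliest=-60m
| stats count by host, source, sourcetype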
Hi, I have this search, for example:

index=test elb_status_code=200
| timechart count as total span=1s
| stats count as num_seconds by total
| sort by total

When I search this over 1-2 days, my results include totals of 0, 1, 2, 3, etc. When I go above that, 3 days for example, I lose all the data about the 0 value and my results start with 1, 2, 3, etc. Can anyone explain this? Am I doing something wrong, or could this be a bug somewhere?
Hi, I got an error while trying to get logs from the Kaspersky Console. I've done all the tasks to add it, such as port, IP, ....

index="kcs" Type=Error Message="Cannot start sending events to the SIEM system. Functionality in limited mode. Area: System Management."
Hi @phanikumarcs, is the timestamp field one of the columns of your csv file, or is it automatically generated by Splunk because it isn't present in the csv file? I don't see a timestamp field in the screenshot you shared. In your screenshot and in your table there are only the following fields: Subscription Name, Resource Group Name, Key Vault Name, Secret Name, Expiration Date, Months. Ciao. Giuseppe
@gcusello Yeah, I tried adding the data via upload; when I select the sourcetype as csv there, I can see the timestamp field.
Hi experts, I am seeking assistance with configuring Sysmon in inputs.conf on a Splunk Universal Forwarder. The configuration is based on the Splunk Technology Add-on (TA) for Sysmon.

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
index = sysmon

Is this the correct config?
I created an API test with Synthetics, but I can't set up a detector that checks whether 2 consecutive requests (2 in a row) fail. Is there any way to configure the detector to raise an alarm if 2 requests in a row result in errors?