All Posts


Hi All, we want to run a modular input cleanup. What happens to the checkpoints? Will ingestion start from the beginning again? Thanks, Nick
Interesting, it does look like you can't use a token as an attribute value in XML. I'm not sure whether that can be changed.
I also suspect that you did not post your message text field completely, as that rex statement would not produce the results you gave, due to the \D+. Can you post your message text field in full?
Hi. It's just like @ITWhisperer said: there must be some way to combine the events that belong to one transaction, and your current example doesn't contain any information for doing that. Once you find some common piece of information that is present in all of them, you can try e.g. @gcusello's way of combining them together. I assume there could be output from several processes, on one or more nodes, generating those log events? If there is only one node and only one process at a time, then you can use @gcusello's example as-is. The best way to continue is to ask the developer to add a unique transaction id to the logs (e.g. uuidgen -> B49A0412-3EBB-4377-A026-D8E43EC9F7F1, different output on every run), which we could then use to tie the transactions together. r. Ismo
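To make that concrete, here is a minimal sketch of stitching the events together once such an id exists. It assumes a hypothetical txid=<uuid> pair has been added to every log line; the field name is made up for illustration:

| rex "txid=(?<txid>[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12})"
| stats min(_time) as start, max(_time) as end, values(_raw) as events by txid
| eval duration_sec = end - start

With a reliable txid, stats is usually a better choice than the transaction command because it is not memory-bound in the same way.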
I'm confused ... why have you not just done

| eval "Sequence Number"=split('Message Text', ",")
| table "Sequence Number"

as advised earlier? Substitute your actual field name for Message Text above.
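If the goal is one sequence number per row rather than one multivalue cell, a small extension of the same idea (still assuming the field is literally named Message Text) is:

| eval seq=split('Message Text', ",")
| mvexpand seq
| table seq

split turns the comma-separated string into a multivalue field, and mvexpand gives each value its own result row.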
Hi. If you have used those IPs in an alert's SPL, then they are searchable via REST. But if you are looking for a way to find the IPs that end up in the result set of some alert, that could be quite hard. With REST you can find all alerts and the search commands they use; then you can try to extract the IPs from those searches, or from the lookups they use. Here is one SPL I have used to get a list of all alerts and reports:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search disabled=0 AND is_scheduled=1
```| search NOT eai:acl.app IN (splunk_instrumentation splunk_rapid_diag splunk_archiver splunk_monitoring_console splunk_app_db_connect splunk_app_aws Splunk_TA_aws Splunk_ML_Toolkit)```
| rename "alert.track" as alert_track
| eval type=case(
    alert_track=1, "alert",
    (isnotnull(actions) AND actions!="") AND (isnotnull(alert_threshold) AND alert_threshold!=""), "alert",
    (isnotnull(alert_comparator) AND alert_comparator!="") AND (isnotnull(alert_type) AND alert_type!="always"), "alert",
    true(), "report")
| fields title type eai:acl.app is_scheduled description search disabled triggered_alert_count actions action.script.filename alert.severity cron_schedule
```| where type="alert"```
| dedup title eai:acl.app
| sort eai:acl.app title

Just add | where type="alert" at the end and you will get only the alerts. Then continue with a field search to look at each alert's SPL command, etc. r. Ismo
No, you can't. Excel's xlsx format is not ASCII/UTF-8 text with a separator; it's a zipped collection of XML files. If you want to use files from Excel, you should export them to CSV first and then use that CSV file. On Splunkbase there seems to be an app for this, https://splunkbase.splunk.com/app/6969. Unfortunately it seems to be licensed or something similar, as it's not freely downloadable. Also, it only exports data to Excel rather than reading from Excel, if I have understood it correctly.
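For completeness: once you have exported the sheet to CSV and uploaded it as a lookup table file (Settings -> Lookups -> Lookup table files), you can read it with inputlookup; mydata.csv here is just a placeholder name:

| inputlookup mydata.csv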
Can I use an xlsx file here?
Hi. Could it be that the start of this file is the same every day? That way Splunk could see it as the same file. You could try adding

initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to identify whether it has already seen the file.
* You might want to adjust this if you have many files with common headers (comment headers, long CSV headers, etc.) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting causes data to be re-indexed. You might want to consult with Splunk Support before adjusting this value - the default is fine for most installations.
* Default: 256 (bytes)

to inputs.conf to tackle that. This alone is probably not enough if only the content has changed. Then you can also try adding CHECK_METHOD to props.conf:

# File checksum configuration
CHECK_METHOD = [endpoint_md5|entire_md5|modtime]
* Set CHECK_METHOD to "endpoint_md5" to have Splunk software perform a checksum of the first and last 256 bytes of a file. When it finds matches, Splunk software lists the file as already indexed and indexes only new data, or ignores it if there is no new data.
* Set CHECK_METHOD to "entire_md5" to use the checksum of the entire file.
* Set CHECK_METHOD to "modtime" to check only the modification time of the file.
* Settings other than "endpoint_md5" cause Splunk software to index the entire file for each detected change.
* This option is only valid for [source::<source>] stanzas.
* This setting applies at input time, when data is first read by Splunk software, such as on a forwarder that has configured inputs acquiring the data.
* Default: endpoint_md5

initCrcLength = <integer>
* See documentation in inputs.conf.spec.

I hope those will help you. r. Ismo
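As a concrete sketch (the monitored path is hypothetical), the two settings would be applied like this:

# inputs.conf on the instance reading the files
[monitor:///mnt/logs/daily/*/report.csv]
initCrcLength = 1024

# props.conf on the same instance
[source::/mnt/logs/daily/*/report.csv]
CHECK_METHOD = entire_md5

Both settings apply at input time, so they must go on the forwarder (or other instance) that actually reads the files, not on the search head.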
Hi, here is the simplest way:

| makeresults
| search foo
| outputlookup foo.csv

Basically, you create an "empty" result set and then forward it to the outputlookup command. r. Ismo
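If the lookup needs specific column headers, one variation (the field names user and src_ip are made up) is to emit a single blank row so the header line gets written:

| makeresults
| eval user="", src_ip=""
| table user src_ip
| outputlookup empty.csv

The file then contains the header row plus one blank row; you can filter that row out when reading the lookup if it gets in the way.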
How do I create an empty .csv lookup in Splunk Web?
I ask this question because I ran into an issue with UF collection. I have folders named by date, for example 2023-08-31, and the log files are placed there, and so on. The log file names may be the same, but the content is different. I found a strange phenomenon: collection always stops at the previous day. For example, today is 2023-09-01 and it stops at yesterday, 2023-08-31; it will not collect the logs generated today. The file names in the two folders are the same, but the content, size, and modified time are different. I have also added the crcSalt parameter. It collects data again after I restart the UF, and it cycles through this phenomenon every day until I restart. So is there any parameter for this? Thanks so much.

My inputs are as below:

[monitor:///mnt/business/pvc-6e1ed89e/privopen/*/open-test*.log]
disabled = 0
host = myhost
index = test_index
crcSalt = <SOURCE>
sourcetype = test_business
_TCP_ROUTING = azure_hf
Hello @bowesmana. Example: all numbers are numeric only.

Message Text field from the 1st event:
Sequence numbers 00000000000000872510,00000000000000872511,00000000000000872512,00000000000000872513,00000000000000872514

Message Text field from the 2nd event:
Sequence numbers 00000000000000872515,00000000000000872516,00000000000000872518,00000000000000872519,00000000000000872520

In the logs 00000000000000872517 was missing, so the actual condition is to check for missing sequence numbers, one by one; if a number is not in the correct sequence, I need to throw an alert. Please suggest a regex expression for this issue. With the query below I can only get the first value from the mentioned logs (events):

| rex field="cip: Audit Message. Message Text" "\D+(?<SequenceNumber>\d+)"
| table SequenceNumber

Output:
00000000000000872510
00000000000000872515

But I need the whole list of sequence numbers in a statistics table, one by one. Hope you understood. Thanks in advance.
Hi. Very first question: I have created a prebuilt panel and I didn't want to hardcode its name in the ref parameter. Splunk doesn't like the following. Any ideas or advice?

<init>
  <set token="PageName">help_for_toy</set>
</init>

<row>
  <panel id="help" ref="$PageName$"></panel>
</row>
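One workaround, since tokens do work in the depends attribute even though they don't in ref, is to declare each candidate prebuilt panel and let tokens control which one is shown. A sketch assuming two panels named help_for_toy and help_for_other exist:

<row>
  <panel id="help_toy" ref="help_for_toy" depends="$show_toy$"></panel>
  <panel id="help_other" ref="help_for_other" depends="$show_other$"></panel>
</row>

Setting exactly one of the tokens (and unsetting the other) swaps the visible panel without ever putting a token inside ref.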
The metrics will be as below, along with various other limit monitoring:

Custom Metrics|Limits|Accounts|<accountName>|Node|Maximum Limit
Custom Metrics|Limits|Accounts|<accountName>|Node|Current Usage
Custom Metrics|Limits|Accounts|<accountName>|Node|Status
As @yuanliu points out, under certain circumstances, the following are functionally the same:

index=Test field1 field2 field3
index=Test "field1"="*" "field2"="*" "field3"="*"

However, from Splunk's point of view they are very different. In the first case, the search is looking for a piece of TEXT in the _raw event called 'field1' (or field2/field3), whereas in the second, it's looking for an extracted field called field1 that has some value. So, considering these two _raw example events:

2023-08-31T08:00:00 field1="Hello"
2023-08-31T08:00:00 Hello="field1"

the first search will find both events, whereas the second search will only find the FIRST event. Here's an example to demonstrate:

| makeresults
| eval x=split("2023-08-31T08:00:00 field1=\"Hello\",2023-08-31T08:00:00 Hello=\"field1\"", ",")
| mvexpand x
| eval _time=strptime(x,"%FT%T")
| rename x as _raw
| extract
| search field1

This finds both events, but if you change the last line to | search field1=* you will only get one event.

As for validating your data, you clearly cannot go through 24m events, so you would have to do aggregations and check the numbers, and you can only validate if you know what to expect. All those wildcard searches are not particularly performant, and since they pick up most events, you may want to turn them into a NOT search, i.e.

index=o365 NOT (f1=* f2=* ...)

which should return the 1k events not found.

Don't forget that Splunk returns _raw events and does field extraction, so even when you say you only want 40 fields, it still has to process all the events. After doing any data processing you need, validate by filtering out the events you want to exclude at a later point in the Splunk pipeline. For example, if the events you do NOT want have no Operation field, then

| stats count by Operation

will filter out the events that don't have the Operation field anyway, and will be much faster than your complex wildcard search.
What is the equivalent of <html /> in Dashboard Studio? I want to have some static links.
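The usual stand-in is a Markdown visualization. A sketch of the relevant fragment of the dashboard's JSON definition (the visualization id and the link targets are placeholders):

{
  "visualizations": {
    "viz_static_links": {
      "type": "splunk.markdown",
      "options": {
        "markdown": "**Useful links**\n\n[Splunk Docs](https://docs.splunk.com)\n\n[Splunk Community](https://community.splunk.com)"
      }
    }
  }
}

Markdown gives you links, headings, and basic text styling, which covers most of what <html /> panels were used for in Simple XML.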
@Naga1 The split command is not for a static set of numbers; it will split whatever 'dynamic' numbers you have, whether that is

00000000000000872510,00000000000000872511,00000000000000872512,00000000000000872513,00000000000000872514

or

00000000000000999995,00000000000000999996,00000000000000999997,00000000000000999998,00000000000000999999

so perhaps you can give a clearer example of what your data might look like, so we can understand what you mean by dynamic. Are you trying to say that you have a single field called MessageText that may contain

ABC,12345,XYZ,98765,Hello,444444,Goodbye,777777

and you want to extract all numeric sequences from it? If so, give some examples of what the data will look like, so we can work out a suitable matching/extraction pattern.
As you must be familiar with by now, the answer to any data analysis question depends on the data. If the strings "field1", "field2", etc. appear in the raw data AND signify the existence of field names of the same, you are correct that

index=Test field1 field2 field3

and

index=Test "field1"="*" "field2"="*" "field3"="*"

are functionally equivalent. (Even in such cases, semantic differences can still cause performance differences, depending on the inner workings of the search engine.) In some cases, however, a field can exist without the field name appearing in the raw data; or the field name may exist in the raw data but not as a term in the SPL sense. In such cases, the two are functionally different. For example, Splunk may extract "field1_abcd" from the raw data to give field1=abcd. The search "index=Test field1" will not find this one. Hope this helps.
Your original post says "create an alert if there is an increment." If you want to alert when there is no change, i.e., no increment or decrement, the formula is simpler because we don't have to calculate whether a change is an increment or a decrement.

| stats list(_time) as _time list(event_id) as event_id by event_name task_id
| where mvindex(event_id, 0) = mvindex(event_id, -1)
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

Here is a modified emulation where task_id 0 has no change in event_id:

| makeresults
| eval data = split("8/01/2023 3:52:40.395 PM server_state|3 1223681 5
8/01/2023 3:50:40.395 PM server_state|2 1201257 3
8/01/2023 3:45:40.395 PM server_state|1 1135465 2
8/01/2023 3:41:40.395 PM server_state|0 1545467 5
8/01/2023 3:36:40.395 PM server_state|3 1223680 0
8/01/2023 3:25:40.395 PM server_state|2 1201256 2
8/01/2023 3:15:40.395 PM server_state|1 1135464 3
8/01/2023 3:10:40.395 PM server_state|0 1545467 8", "
")
| mvexpand data
| rename data as _raw
| rex "(?<ts>(\S+\s+){3})(?<event_name>\w+)\|(?<task_id>\d+) (?<event_id>\d+)"
| eval _time = strptime(ts, "%m/%d/%Y %I:%M:%S.%3Q %p")
``` data emulation above ```

This gives you exactly the output you asked for.