All Posts


Hi, can you please help me find out how to count the events between two events in Splunk? For example, I have to find the count of events (RPWARDA, SPWARAA, SPWARRA) between events IDJO20P and PIDZJEA. IDJO20P to PIDZJEA is considered one day, and I have to find the count of events (RPWARDA, SPWARAA, SPWARRA) in a day.

Splunk query to find the events:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
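One possible sketch (assuming NIDF is extracted at search time and that IDJO20P marks the start of each logical day; swap the marker if PIDZJEA is the opening boundary instead): number each window by counting the IDJO20P boundary events, then count the three codes per window.

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| sort 0 _time
| streamstats count(eval(if(match(_raw, "IDJO20P"), 1, null()))) AS day_window
| search NIDF IN ("RPWARDA", "SPWARAA", "SPWARRA")
| stats count AS event_count BY day_window

Each row of the result is one IDJO20P-to-IDJO20P window with the number of matching events inside it.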
I have the following query that gives me a list of pods that are missing, based on a comparison with what should be deployed as defined in the pod_list.csv inputlookup:

index=abc sourcetype=kubectl importance=non-critical
| dedup pod_name
| eval Observed=1
| append
    [| inputlookup pod_list.csv
    | eval Observed=0
    | eval importance=if(isnull(importance), "critical", importance)
    | search importance=non-critical]
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| eval importance=if(isnull(importance), "critical", importance)
| stats max(Observed) as Observed by pod_name_lookup, importance
| where Observed=0 and importance="non-critical"

The data in pod_list.csv looks like so:

namespace    pod_name_lookup      importance
ns1          kafka-*              critical
ns1          apache-*             critical
ns2          grafana-backup-*     non-critical

This works as expected. I am now having difficulty creating a timechart with this data, to be able to see when a pod wasn't deployed, not just what is currently missing. Any help is greatly appreciated.
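A rough sketch for the time dimension (assuming hourly granularity is acceptable; note this only trends pods that were seen at least once in the search window, so pods absent for the whole range still need the inputlookup append from the original query):

index=abc sourcetype=kubectl importance=non-critical
| timechart span=1h count BY pod_name
| fillnull value=0

Hours where a pod's column is 0 are the intervals in which that pod was not observed.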
Hi, just to add to this existing query: I need to get the memory details from REQUEST alone. My raw data is like the below, and this memory is not available in all the events. So I need to fetch a report with only the events that have "memory" in the REQUEST (not all events have "memory" in the REQUEST). Please help asap.
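A minimal sketch (assuming REQUEST is an already-extracted field and the value appears as memory=<value> inside it; the index name and the exact pattern are placeholders, since the raw sample is not shown here):

index=your_index REQUEST=*
| rex field=REQUEST "memory[=:]\s*(?<memory>\S+)"
| where isnotnull(memory)
| table _time, REQUEST, memory

The rex only populates memory for events that actually contain it, and the where clause drops the rest.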
@gcusello In case you just have an index and you have to find keywords inside this index, from which parameter do you choose your keywords? As we know, on the left side of Splunk you have many fields with keywords.
Hi @isoutamo  So if I am using the Lookup Editor, I don't need any intervention from the admin, including restarting or refreshing the URL, correct? Thanks
If you want to monitor your SaaS application from the outside, there are also mechanisms available in the observability components (like Real User Monitoring, Synthetic Monitoring, ...).
That helps. You can surely look on https://splunkbase.splunk.com to see if there is an add-on for your SaaS application. Usually you get the technical mechanisms in an add-on and the visual knowledge objects like dashboards in an app, but sometimes it's a combination. Please refer to the documentation of the app/add-on to see what it is capable of. If there is one, you would get it into your Splunk environment, either Splunk Cloud or Splunk Enterprise; the add-on should be vetted for your instance and version. After that, you follow the instructions of the app/add-on to onboard the data.

If there is nothing available on Splunkbase, you would start from scratch. For that, the Add-on Builder is a good start: you would create the mechanism to get the data from the SaaS REST API, extract the fields, and create dashboards after that. That's the usual process.
It works, thanks!
I have defined the following sourcetype for a CSV file data input without headers:

[test_csv]
SHOULD_LINEMERGE = false
TRANSFORMS = drop_start_and_interim
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = 14
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

When I index a test file, I see that one of the destination fields is not correctly extracted: this field is bounded by two double quotes and is extracted together with the next field as a single field. A sample raw event with the problem is the following (the problematic field is the one containing ""aaa://inf.tsa""):

2,"127.0.0.1",5060,"258670334_106281015@83.72.181.1","258670334_106281015@83.72.181.1","258670334_106281015@83.72.181.1","SIP",,,"<sip:+34765300391@83.72.181.1;user=phone>;tag=gK0a655dd7","<sip:+376826792@193.178.74.21;user=phone>",1,1611,"14:35:43.412 CET Jan 09 2024","14:35:52.884 CET Jan 09 2024","15:02:43.220 CET Jan 09 2024",1,"s0p2",53,"s0p0",52,"IMS","IX","localhost:154311320","PCMA","IX","83.72.181.97",40072,"193.178.74.21",20526,"IMS","10.12.162.20",16864,"10.12.45.10",25732,0,0,0,0,0,0,0,1,17551834,80513,9284,440,"localhost:154311321","PCMA","IMS","10.12.45.10",25732,"10.12.162.20",16864,"IX","193.178.74.21",20526,"83.72.181.97",40072,0,0,0,0,0,0,0,2,17552488,80516,9284,440,,,,"0.0.0.0",0,"0.0.0.0",0,,"0.0.0.0",0,"0.0.0.0",0,0,0,0,0,0,0,0,0,0,0,,,,"0.0.0.0",0,"0.0.0.0",0,,"0.0.0.0",0,"0.0.0.0",0,0,0,0,0,0,0,0,0,0,0,"bb6c6d3001911f060e83641d9e64",""aaa://inf.tsa"","SCZ9.0.0 Patch 2 (Build 211)","GMT-01:00",245,"sip:+376826792@193.178.74.21:5060;user=phone",,,,,"sip:+34765300391@83.72.181.1:5060;user=phone","193.178.74.21:5060","83.72.181.1:5060","10.12.193.4:5060","10.59.90.201:5060",,3,2,0,0,"sip:+376826792@FO01-vICSCF-01.ims.mnc006.mcc333.3gppnetwork.org:5060;user=phone",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"15:02:43.220 CET Jan 09 2024","15:02:43.220 CET Jan 09 2024","00:00:00.000 UTC Jan 01 1970","00:00:00.000 UTC Jan 01 1970","audio","audio",,,17551834,80513,17552052,80514,0,0,0,0,19516010

The content of field 117 is:

"aaa://inf.tsa","SCZ9.0.0 Patch 2 (Build 211)

It corresponds to fields 117 and 118 concatenated, and all the following fields are offset by one position. I have tried to replace the two double quotes with one in two ways:

1. Adding the line SEDCMD = s/""/"/g as the first line of the sourcetype definition in props.conf, but it only changes the _raw, and I still have the same issue extracting field 117 and the offset of the following fields.

2. Overwriting the _raw, replacing the two double quotes with one, with the following transform:

[rewrite_raw]
INGEST_EVAL = _raw:=replace(_raw, "\"\"", "\"")

applied in the sourcetype after the other transform that drops some kinds of rows based on the value of the first field:
TRANSFORMS = drop_start_and_interim, rewrite_raw

The result is the same: the _raw is changed, but the issue extracting field 117 and the offset of the following fields persists.

I have also tried to rewrite the _raw with the following transform, and it did not solve the problem either; the result was the same:

[remove_double_quotes]
SOURCE_KEY = _raw
REGEX = (?:\""|\"|)(.*?)(?:\"\"|\"|)\,(?:\""|\"|)(.*?)(?:\"\"|\"|)\,(?:\""|\"|)(.*?)(?:\"\"|\"|)\, [the same optionally-quoted capture group, repeated once per field for all 187 fields] (?:\""|\"|)(.*)(?:\"\"|\"|)
FORMAT = "$1","$2","$3","$4","$5","$6","$7","$8","$9","$10","$11","$12","$13","$14","$15","$16","$17","$18","$19","$20","$21","$22","$23","$24","$25","$26","$27","$28","$29","$30","$31","$32","$33","$34","$35","$36","$37","$38","$39","$40","$41","$42","$43","$44","$45","$46","$47","$48","$49","$50","$51","$52","$53","$54","$55","$56","$57","$58","$59","$60","$61","$62","$63","$64","$65","$66","$67","$68","$69","$70","$71","$72","$73","$74","$75","$76","$77","$78","$79","$80","$81","$82","$83","$84","$85","$86","$87","$88","$89","$90","$91","$92","$93","$94","$95","$96","$97","$98","$99","$100","$101","$102","$103","$104","$105","$106","$107","$108","$109","$110","$111","$112","$113","$114","$115","$116","$117","$118","$119","$120","$121","$122","$123","$124","$125","$126","$127","$128","$129","$130","$131","$132","$133","$134","$135","$136","$137","$138","$139","$140","$141","$142","$143","$144","$145","$146","$147","$148","$149","$150","$151","$152","$153","$154","$155","$156","$157","$158","$159","$160","$161","$162","$163","$164","$165","$166","$167","$168","$169","$170","$171","$172","$173","$174","$175","$176","$177","$178","$179","$180","$181","$182","$183","$184","$185","$186","$187"
DEST_KEY = _raw

Is there any way to solve this problem? Thank you
Sounds like when you generated the key for your Splunk web server (privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key) you set a password/passphrase. So now you need to supply it in web.conf (the password that protects the private key specified by 'privKeyPath'):

sslPassword = (your privKeyPath key password/passphrase)

Set your password with the above setting, under [settings], and restart - see if that works.
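A minimal sketch of the resulting web.conf stanza (paths taken from the question; the passphrase value is a placeholder):

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/splunkCert.pem
# placeholder - use the passphrase you set when generating the key
sslPassword = <your private key passphrase>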
Please change the sourcetype and try
I do understand that "my_field" was just a placeholder, since you did not know the names of my tokens. My actual field is status.errorCode, and creating a token "errorCode" from that does pull my results into the dashboard. The problem comes when I try to filter my token "errorCode" to show anything that isn't a value of 0. I posted my code in another reply here.
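For the non-zero filter itself, a minimal sketch (assuming the panel search can filter on the raw field rather than the token; the index name is a placeholder):

index=your_index
| spath status.errorCode
| where 'status.errorCode' != "0"

Field names containing dots need the single quotes in eval/where expressions.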
Has anyone worked with ./splunk check-integrity, and if so, do you know how to interpret the results? This link does not provide information on how to interpret them: https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Dataintegritycontrol I was given some cursory information, but it still does not tell me enough to know when a compromise may have occurred and where. Example
Thanks again! The btool command is new to me. The results are much bigger on Forwarder 2, so there might be something in there.
Well... it's kinda complicated, because we're talking about CSV. Normally most of the data is just split into separate events (usually one event per line), some metadata is added, and the fields are extracted at search time. But in the case of CSV, the fields can be split right at the moment of ingestion and indexed, immutable after ingestion (so-called indexed extractions). So it depends heavily on your configuration.
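For illustration, a minimal props.conf sketch of the indexed-extractions case (the stanza name is hypothetical; the settings are standard structured-data options):

# hypothetical sourcetype name
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With this in place the column values are written into the index at ingestion time; without INDEXED_EXTRACTIONS, field extraction happens at search time instead.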
Were you able to get past this error? If so, what was the resolution, as I am facing the same issue now.
I tried the command you gave me, but nothing is displayed when adding _time in the BY. Additionally, I added other data, but I would like to display one user per line rather than grouping multiple users together because they share the same IP address. For instance, on a certain IP address multiple services were used, but I don't know which service was used by which user. So, if we display one user per line, I think it will be unnecessary to use earliest and latest, and we can just display the correct _time, right?

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| where NOT match(Users, "^AAA-[0-9]{5}\$")
| eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP)
| eval ip=coalesce(IP, srcip)
| stats dc(index) AS index_count values(Users) AS Users values(destip) AS destip values(service) AS service earliest(_time) AS earliest latest(_time) AS latest BY ip
| where index_count>1
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, destip, service, earliest, latest
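A sketch of the one-row-per-user variant (field names are taken from the query above; keeping the per-IP cross-index check via eventstats is an assumption about the intent):

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| where NOT match(Users, "^AAA-[0-9]{5}\$")
| eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP)
| eval ip=coalesce(IP, srcip)
| eventstats dc(index) AS index_count BY ip
| where index_count>1
| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table time, Users, ip, destip, service

eventstats keeps one row per event while still letting you filter on the per-IP index count, so each user shows up on its own line with its own _time.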
Hello there! After following these docs for SSL certificate installation:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Howtoself-signcertificates
https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/HowtoprepareyoursignedcertificatesforSplunk
https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/SecureSplunkWebusingasignedcertificate
I receive an error message when I try to restart Splunk:

Cannot decrypt private key in "/opt/splunk/etc/auth/mycerts/myServerPrivateKey.key" without a password

web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/splunkCert.pem

Any solutions for this issue will be appreciated!