All Topics

Hi everyone, I am new to Splunk and I have been trying to build a complex report that I haven't been able to solve, so any help would be greatly appreciated. I need to create a table like this:

ID  Name   Function    Device  Number  Unit
1   AAA23  Allocate    A1      12      U1
                       A2      15      U2
                       A3      13      U1
                       A4      12      U4
2   AAA23  Allocate    A1      12      U1
3   AAA23  Deallocate  A1      12      U1
                       A2      15      U2

Here are the three events in JSON format:

1 {"ID":"1","NAME":"AAA23","FUNCTION":"1", "DEVICE_001":"A1","NUMBER_001":12,"UNIT_001":"U1", "DEVICE_002":"A2","NUMBER_002":15,"UNIT_002":"U2", "DEVICE_003":"A3","NUMBER_003":13,"UNIT_003":"U1", "DEVICE_004":"A4","NUMBER_004":12,"UNIT_004":"U4"}
2 {"ID":"2","NAME":"AAA23","FUNCTION":"1", "DEVICE_001":"A1","NUMBER_001":12,"UNIT_001":"U1"}
3 {"ID":"3","NAME":"AAA23","FUNCTION":"2", "DEVICE_001":"A1","NUMBER_001":12,"UNIT_001":"U1", "DEVICE_002":"A2","NUMBER_002":15,"UNIT_002":"U2"}

As you can see, the names of the DEVICE, NUMBER and UNIT fields depend on the number of entries for a given NAME & ID, so for the same NAME & ID values I sometimes have 50 different field names with a consecutive suffix, for example DEVICE_001, DEVICE_002, ..., DEVICE_050; NUMBER_001, NUMBER_002, ..., NUMBER_050; UNIT_001, UNIT_002, ..., UNIT_050. Sometimes there is only one entry. The count is variable and doesn't depend on a specific field name.

With this in mind, my question is how I can build this table in Splunk. I have been trying the following:

index=dataexample
| spath
| rex "\"DEVICE_\d+\":\"(?P<DEVICE_1>[a-zA-Z0-9]+)\"" max_match=0
| rex "\"NUMBER_\d+\":(?P<NUMBER_1>\d+)" max_match=0
| rex "\"UNIT_\d+\":\"(?P<UNIT_1>[a-zA-Z0-9]+)\"" max_match=0
| eval TIPO=case(FUNCTION==01,"ALLOCATE", FUNCTION==02,"DEALLOCATE", FUNCTION==03,"OTHER")
| stats values(NAME), values(TIPO), values(DEVICE_1), values(NUMBER_1), values(UNIT_1) by ID

But I don't know how to put all the variable (1, 50 or 60) field values into just one column per DEVICE, NUMBER and UNIT for each event.
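
A minimal sketch of one way to get a single column per DEVICE, NUMBER and UNIT and then one row per device, assuming the raw JSON uses standard double quotes (the field names DEVICE, NUMBER, UNIT and TYPE are illustrative, not from the original post):

index=dataexample
| rex "\"DEVICE_\d+\":\"(?<DEVICE>[^\"]+)\"" max_match=0
| rex "\"NUMBER_\d+\":(?<NUMBER>\d+)" max_match=0
| rex "\"UNIT_\d+\":\"(?<UNIT>[^\"]+)\"" max_match=0
| eval TYPE=case(FUNCTION=="1","ALLOCATE", FUNCTION=="2","DEALLOCATE", FUNCTION=="3","OTHER")
| eval combined=mvzip(mvzip(DEVICE, NUMBER, "|"), UNIT, "|")
| mvexpand combined
| eval DEVICE=mvindex(split(combined,"|"),0), NUMBER=mvindex(split(combined,"|"),1), UNIT=mvindex(split(combined,"|"),2)
| table ID, NAME, TYPE, DEVICE, NUMBER, UNIT

The mvzip/mvexpand pair keeps each device aligned with its own number and unit, which values() in a plain stats would not guarantee.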
I'm trying to query Observability Cloud for a list of traces within a custom time frame. From the API reference, I do not see an endpoint for this query. Any help would be appreciated.
Hello, I have been working on this for a few days, looking at numerous Splunk answers, but have yet to find something that works for my situation. I have a large inventory of servers that I search through, and I currently use a general IN clause in my searches, but some queries have 20 or more servers to search through and I want to simplify them. I am currently using something like this, which works but can get exceedingly large depending on which servers I need to look up:

index=myindex hosts IN (server1,server2,server3) <mysearchquery>

So I had the bright idea of creating a lookup table to group the servers together. The lookup table:

group,server
group1,server1
group1,server2
group1,server3
group2,server4
group2,server5

I can get the desired list of servers with the following:

| inputlookup lookuptable.csv
| search group=group1
| fields server

This would return:

server1
server2

but applying it to my search has proved a lot more difficult. I think I was close with this one but have not quite figured it out yet:

index=myindex <Search>
    [ | inputlookup lookuptable.csv
      | search group=group1
      | fields server ]

Any suggestions would be greatly appreciated, or a link to similar posts for me to review.
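
A sketch of how the subsearch can be made to emit the field the main search actually filters on, assuming the events carry the standard host field (rename to hosts if that is the field your data uses):

index=myindex <mysearchquery>
    [ | inputlookup lookuptable.csv
      | search group=group1
      | fields server
      | rename server AS host ]

The subsearch then expands to (host=server1 OR host=server2 OR host=server3), which is equivalent to the original IN clause.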
Has anyone integrated with Splunk from the controller and used extra parameters? I have tried adding index=* as a parameter, but when the Splunk query page opens, the extra parameter is not part of the search string.
Hi, I have a dashboard with multiple table views from different indexes and just wondered if it is possible to combine them all into one stats table? Thanks, Joe
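
A minimal sketch of one way to collapse several per-index tables into a single stats table, assuming the panels differ only by index (index names and the aggregations are placeholders):

(index=index_a) OR (index=index_b) OR (index=index_c)
| stats count AS events, dc(host) AS hosts BY index, sourcetype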
I extracted the _raw field and received values looking like \xB9k?\x93\xE8\xC6\. How can I convert this to a readable format?
I would appreciate any assistance with my rex error. When running this rex command in a dashboard:

| rex "New Logon:\s+Security ID:\s+(?&lt;account&gt;.*)"

I receive an error in the dashboard saying "missing terminator". Thanks in advance!
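
A sketch of how the same rex usually has to be written inside Simple XML so the parser does not truncate the quoted pattern (the surrounding search and index are illustrative):

<query>
  index=wineventlog "New Logon:"
  | rex "New Logon:\s+Security ID:\s+(?&lt;account&gt;[^\r\n]+)"
</query>

Inside dashboard XML the < of the named capture group must stay escaped as &lt;, and every double quote in the rex string must be balanced; an unbalanced quote (for example one swallowed by a token substitution) is a common cause of that kind of terminator error.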
Hello dear community. For our Splunk Enterprise deployment, we are thinking about using Splunk DB Connect to ingest structured data (coming from the ERP) into Splunk. What do you use as a strategy to handle data updates and data deletions? Does Splunk DB Connect offer something like this out of the box, or should we think about another setup for that?
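
Indexed events are immutable, so one commonly used pattern is to re-ingest changed rows via a rising column (for example a LAST_MODIFIED timestamp) and resolve updates and deletes at search time. A sketch, with hypothetical index, sourcetype and column names:

index=erp_data sourcetype=erp:orders
| dedup ORDER_ID sortby -LAST_MODIFIED
| where STATUS!="DELETED"

Deletions only work this way if the source table flags deleted rows (soft delete) instead of physically removing them; DB Connect itself only appends whatever the input query returns.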
This is the inputs file. As you can see, all stanzas monitor the same directory structure, but the last one is supposed to catch all the logs that do not match the defined *_xxxxx_*.log patterns, so that general logs are stored in Splunk as well. How can I do this?

[monitor:///var/log/containers/*_ctisp1_*.log]
index = ctisp1
sourcetype = dks-ctisp1
followSymlink = true

[monitor:///var/log/containers/*_ocpprd_*.log]
index = ocpprd
sourcetype = dks-ocpprd
followSymlink = true

[monitor:///var/log/containers/*_custconnectp1_*.log]
index = custcontp1
sourcetype = custcontp1
followSymlink = true

[monitor:///var/log/containers/*_ocpnotifp3_*.log]
index = ocpnotifp3
sourcetype = dks-ocpnotifp3

[monitor:///var/log/containers/*_ocpcorep3_*.log]
index = ocpcorep3
sourcetype = ocpcorep3

[monitor:///var/log/containers/*_custcon2p3_*.log]
index = custcon2p3
sourcetype = custcon2p3

[monitor:///var/log/containers/*_custcon1p3_*.log]
index = custcon1p3
sourcetype = custcont1p3

[monitor:///var/log/containers/*_ctisap3_*.log]
index = ctisap3
sourcetype = dks-ctisap3

[monitor:///var/log/containers/*_ctisp1_*.log]
index = ctisp1
sourcetype = dks-ctisp1

[monitor:///var/log/containers/*_ivrp1_*.log]
index = ivrp1
sourcetype = dks-ivrp1

#[monitor:///host/containers/*/[a-f0-9]+-json.log$]
#index = dcp
#sourcetype = dner-logsiamanti-container-logs

#[monitor:///var/lib/docker/containers/*/[a-f0-9]+-json.log$]
#index = dcp
#sourcetype = diamanti-container-logs

[monitor:///var/log/containers/*_ocpnotifp3_*.log]
index = ocpnotifp3
sourcetype = ocpnotifp3

[monitor:///var/log/containers/*_ocpcorep3_*.log]
index = ocpcorep3
sourcetype = ocpcorep3

[monitor:///var/log/containers/*_custcon2p3_*.log]
index = custcon2p3
sourcetype = custcont2p3

[monitor:///var/log/containers/*_igridp2_*.log]
index = igridp2
sourcetype = dks-igridp2

## END of PROD

## Monitor all Diamanti logs
[monitor:///var/log/diamanti/.../*.log]
index = dcp
sourcetype = diamanti-system-logs

# Monitor Container logs
[monitor:///var/log/containers/*.log]
index = dcp
sourcetype = diamanti-container-logs
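
A sketch of one way to do this with the blacklist attribute on the catch-all stanza, so files already claimed by a dedicated stanza are excluded (the regex simply lists the container-name patterns used in the stanzas above):

[monitor:///var/log/containers/*.log]
index = dcp
sourcetype = diamanti-container-logs
blacklist = _(ctisp1|ocpprd|custconnectp1|ocpnotifp3|ocpcorep3|custcon2p3|custcon1p3|ctisap3|ivrp1|igridp2)_

blacklist is matched as a regular expression against the full file path, so any file whose name contains one of the listed _name_ segments is skipped by this stanza while the specific stanzas still pick it up.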
Hello, how would I know when data last came in under any index/sourcetype? I have one query (see below) that shows me the feed status by index, but my objective is to find when data was last fed to each index/sourcetype. Any help will be highly appreciated. Thank you!

| tstats prestats=t count where earliest=-7d@d latest=-0d@d index=* by index, _time span=1d
| timechart useother=false limit=0 span=1d count by index
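
A sketch of a tstats search that reports the most recent event per index and sourcetype (the derived columns are just examples):

| tstats latest(_time) AS last_event_epoch WHERE index=* BY index, sourcetype
| eval last_event=strftime(last_event_epoch, "%Y-%m-%d %H:%M:%S")
| eval hours_since=round((now() - last_event_epoch) / 3600, 1)
| sort - hours_since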
Hi everyone, thanks for taking the time to read this and share your knowledge; I've been struggling a bit with this. I am having an issue making a connection from the endpoint cloud (Cylance), which pushes syslog, to the Splunk heavy forwarder, from which the data is then forwarded to Splunk Cloud. When testing, the UDP ports work and the connection is successful; however, the logs are still not coming into Splunk Enterprise and are not appearing in Splunk Cloud either. I have configured the data input, the inputs.conf and the index correctly. Ports 514 and 6514 TCP are open on the security side (firewalls). My question is: for either port 514 or 6514, is TLS/SSL required by default to make a connection to these ports, or should it connect successfully if I choose not to encrypt it (for testing)? Even when trying a different random TCP port where the connection is successful, the dashboards in Cylance do not populate. Am I missing a piece of the puzzle? I've made sure to follow all the steps provided. Any help is appreciated. Thanks
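
For what it's worth, neither port number forces encryption by itself; it depends on whether the input stanza is defined as plain TCP or as tcp-ssl. A sketch of both variants in inputs.conf on the heavy forwarder (index, sourcetype and certificate path are placeholders):

# plain TCP, no TLS required
[tcp://514]
index = cylance
sourcetype = syslog

# TLS-wrapped TCP, requires a server certificate
[tcp-ssl:6514]
index = cylance
sourcetype = syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <certificate password>

If Cylance actually sends syslog over UDP, a [udp://514] stanza would be needed instead; a TCP listener will not receive UDP datagrams even though the port number matches.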
Hello, I recently upgraded my Splunk Enterprise (and heavy forwarder) instances to 8.2.5 and 8.2.6. Both versions (maybe others too) install the Python Upgrade Readiness App 1.0 by default. Splunk then asked to update the app to 3.1. Nicely done by Splunk, but after the restart the integrity check starts to complain about the missing files of the 1.0 version. It is annoying. Is there a way to "teach" Splunk the new version? (I know the check could be completely turned off, but I would not like to lose that information if something important ever changes.)
Hello, I have source files with very inconsistent and complex event/data structures. I wrote inline field extraction regexes which work for most cases, but which do not extract fields as expected in some cases. I have included three sample events and my inline field extraction regex below. Any help will be highly appreciated. Thank you!

Three sample events:

June 10, 2021 10:41:39:993-0400 - INFO: 439749134|REGT|TEST|SITEMINDER|VALIDATE_ASSERTION|439749134|4deef81s-6455-460b-bf41-c126700d1e9d|2607:fb91:118e:89c9:ad53:43b0:ccce:417c|00||Application data=^CSPProviderName=IDME^givenName=KELLIE^surName=THOMPSON^dateofBirth=1975-04-25^address=21341 E Valley Vista Dr^city=Liberty June 10, 2021 10:41:39:993-0400 EDT 2021^iat= June 10, 2021 10:41:39:993-0400 EDT 2021^AppID=OLA^cspTransactionID=7bdd62bb-966a-426a-9e47-8d2a5a772162

June 10, 2021 10:42:36:991-0400 - INFO: 439741123|REGT|TEST|SITEMINDER|VALIDATE_ASSERTION|439741123|4deef81s-6455-460b-bf41-c126700d1e9d|65.115.214.106|00||Application data=^CSPProviderName=IDME^givenName=KELLIE^surName=THOMPSON^dateofBirth=1975-04-25^address=21341 E Valley Vista Dr^city=Liberty June 10, 2021 10:42:36:991-0400 EDT 2021^iat= June 10, 2021 10:42:36:991-0400 EDT 2021^AppID=OLA^cspTransactionID=7bdd62bb-966a-426a-9e47-8d2a5a772162

May 03, 2021 10:33:50:223-0400 - INFO: NON-8016|IdtokenAuth||authenticate‖lookupClaimVal is null|ERROR|SITEMINDER| QDIAUTH|vp22wsnnn012 |null|null|

My inline field extraction regex (working for the first two events but not the third):

^(?P<TIMESTAMPT>.+)\s+\-\s\w+\:\s(?P<USER>.+)\|(?P<TYPE>\w+)\|(?P<SYSTEM>\w+)\|(?P<EVENT>\w+)\|(?P<EVENTID>\w+)\|(?P<SUBJECT>\w+)\|(?P<LESSION>\w+?\-?\w+?\-?\w+?\-?\w+?-\w+?)\|(?P<SRCADDR>.+)\|(?P<STATUS>\w+)\|(?P<MSG>\w*?)\|(?P<DATA>.+)
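
A sketch of a more tolerant version of the same extraction, in which every pipe-delimited field is captured with [^|]* so that empty values (as in the third event) no longer break the match; whether the captures line up meaningfully for that event still depends on its actual field order:

^(?P<TIMESTAMPT>.+?)\s+-\s+\w+:\s+(?P<USER>[^|]*)\|(?P<TYPE>[^|]*)\|(?P<SYSTEM>[^|]*)\|(?P<EVENT>[^|]*)\|(?P<EVENTID>[^|]*)\|(?P<SUBJECT>[^|]*)\|(?P<LESSION>[^|]*)\|(?P<SRCADDR>[^|]*)\|(?P<STATUS>[^|]*)\|(?P<MSG>[^|]*)\|(?P<DATA>.*)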
I have two values, let's say expected time = 6:00:00 and completion time = 08:32:44, and the expected output should be the difference between them, i.e. (expected - completion), in hours:minutes:seconds format including a negative sign. For example: output = -2:32:44 (which is the difference between expected and completion).
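
A sketch in eval, assuming the two values are literal time-of-day strings on the same day (the field names and literals are illustrative):

| eval expected_epoch=strptime("6:00:00", "%H:%M:%S")
| eval completion_epoch=strptime("08:32:44", "%H:%M:%S")
| eval diff=expected_epoch - completion_epoch
| eval sign=if(diff < 0, "-", "")
| eval abs_diff=abs(diff)
| eval output=sign . tostring(floor(abs_diff/3600)) . ":" . substr("0" . tostring(floor((abs_diff%3600)/60)), -2) . ":" . substr("0" . tostring(floor(abs_diff%60)), -2)

With the example values this yields -2:32:44; replace the literals with the fields that hold your times.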
The percentage of non-high-priority searches delayed (19%) over the last 24 hours is very high and exceeded the yellow threshold (10%) on this Splunk instance. Total searches that were part of this percentage = 5927. Total delayed searches = 1141. Can anyone help me out?
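
A hedged sketch for finding which scheduled searches are piling up; the health report's "delayed" figure is derived from scheduler activity, and the skipped/deferred entries in the scheduler log usually point at the same overloaded searches (the status values shown are common ones, not an exhaustive list):

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count BY app, savedsearch_name, reason
| sort - count

Spreading the worst offenders' cron schedules apart, or reducing their frequency, is usually the first step.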
I'm using an HTTP Event Collector to ingest Palo Alto logs from my syslog forwarders. It's using the raw endpoint: https://host:8088/services/collector/raw. I'm using the Splunk_TA_paloalto to do sourcetyping and field extraction; it also does the time extraction, which appears to work. However, my devices are in the Pacific time zone, not UTC (don't ask why... I just can't fix it). So I created a local directory and a props.conf file in there that looks like this:

-bash-4.2$ pwd
/opt/splunk/etc/master-apps/Splunk_TA_paloalto/local
-bash-4.2$ cat props.conf
[pan_log]
TZ = US/Pacific

[pan:traffic]
TZ = US/Pacific

Then I apply the cluster bundle and push the time zone changes to my indexers (this is an indexer cluster). However, traffic is still received in the UTC time zone. What am I missing? Why won't the indexers correct the time? The Palo Alto app takes in logs using the pan_log sourcetype and then runs transforms to set the correct sourcetype, such as pan:traffic (I'm testing with just traffic logs at this point). In theory, I think it should work with just the pan_log sourcetype, as time extraction happens before transforms, but it isn't working. I also tried stanzas for [source::http:myinput], but that did nothing either. What am I missing? I'm also trying to change the TIME_FORMAT and override datetime.xml. That doesn't work either. Clearly I'm missing something.
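
One thing worth checking, sketched below: for data sent to the raw HEC endpoint, timestamp parsing happens on the instance that hosts the HEC token. If a heavy forwarder sits in front of the indexers and terminates the HEC connection, the TZ override has to live there; props pushed only via master-apps reach the indexers, which is too late. A host-scoped stanza on the HEC-receiving instance might look like this (the host pattern is a placeholder):

# props.conf on the instance that terminates the HEC connection
[host::my-syslog-forwarder*]
TZ = US/Pacific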
Hello, I would like to run a search that filters results matching my conditions and then use a common ID field to combine the results with another source. Let's say:

Source A: fields ID, x, y
Source B: fields ID, z

I want to run a search with some conditions on source A:

index=A sourcetype=A' "x=value" "y<=value"

and then use a join to get the value of z for the results I got from the main search. For now I have something like this:

index=A sourcetype=A' "x=value" "y<=value"
| join [ search index=B sourcetype=B' | fields ID | stats count by z ]

It does not seem to work.
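
A sketch of a stats-based alternative that avoids join's subsearch limits, assuming both sources share the ID field (the index/sourcetype names and the literal values are placeholders):

(index=A sourcetype="A") OR (index=B sourcetype="B")
| stats values(x) AS x, values(y) AS y, values(z) AS z BY ID
| where x="value" AND y<="value"

If you prefer to keep the join, the subsearch has to retain the ID field alongside z (for example | fields ID z); the stats count by z in the example above drops ID, so join has nothing to match on.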
Hello Splunkers! Initially I added the monitor stanza for all the inputs from various time zones, and when I checked, there was a difference between _time and the time present in the event: a lag of 1 or 2 hours depending on that country's time zone and Splunk's time zone. I then figured out that this is because Splunk looks for a timestamp in the event and parses the data with it. Now I need to monitor logs received from different time zones in various countries while Splunk is in a different time zone; can you please share your knowledge on this? When investigating, I found that the below can be set as per https://docs.splunk.com/Documentation/Splunk/8.2.6/Admin/Propsconf:

BREAK_ONLY_BEFORE_DATE = <boolean>
DATETIME_CONFIG = NONE

I could also see that there are options to define the time zones using TZ. Can anyone help me out, please?

Example: my source, test.csv:

SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE
"2022-05-04","12.51.08",The JobA has failed
"2022-05-04","13.00.05",The JobB has failed

Data reflecting in the Splunk UI:

Time                       Event
04/05/2022 12:51:03.000    SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE
04/05/2022 11:51:08.000    "2022-05-04","14.51.08",The JobA has failed
04/05/2022 12:00:05.000    "2022-05-04","13.00.05",The JobB has failed

Only the below event is reflected at the current time when the job is triggered from the application end, which is the correct behaviour, since it has no timestamp defined:

04/05/2022 12:51:03.000    SYSTEMDATE,SYSTEMTIME,FAILUREMESSAGE

Source time zones: various countries like Italy, Romania, Cyprus, etc.
Destination/Splunk time zone: BST

Many thanks! Sarah
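
A sketch of per-host and per-source TZ overrides in props.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder); the host patterns and source path below are placeholders, and the time zone names are standard IANA identifiers:

# props.conf
[host::it-server*]
TZ = Europe/Rome

[host::ro-server*]
TZ = Europe/Bucharest

[host::cy-server*]
TZ = Asia/Nicosia

# or scope by source instead of host
[source::.../test.csv]
TZ = Europe/Rome

TZ only changes how the timestamp inside the event is interpreted at index time; events that are already indexed keep their original _time.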
We have upgraded Splunk from version 8.0.1 to 8.2.6. Post-upgrade we are observing an IOWait status of yellow; how can we solve this issue? Before the upgrade we did not observe this issue. Attaching a snapshot for reference.
Hi all! I need to store more than 500,000 events in an event index and apply aggregation logic that produces metrics to display on a dashboard. I want to use a metrics index to store these metrics so I can improve the performance of the dashboard. The dashboard will have some filters that could generate n! different combinations (one combination per set of filter values). My concern is that, in order to guarantee acceptable response times, I would need to generate a metric for every possible combination of the filters, and that just seems excessive. Is this the only way to achieve what I am looking for?
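
A sketch of an alternative to pre-computing one metric per filter combination: store the metric once per combination of dimension values (each dashboard filter becomes a dimension) and let mstats do the slicing at query time. The index, dimension and metric names below are placeholders.

A scheduled search that rolls events up into the metrics index every 15 minutes:

index=my_events earliest=-15m@m latest=@m
| bin _time span=1m
| stats count AS _value BY _time, region, service, status
| eval metric_name="app.events.count"
| mcollect index=my_metrics

A dashboard panel driven by the filter tokens:

| mstats sum(app.events.count) AS total WHERE index=my_metrics AND region="$region$" span=1h BY service

mstats can group by any stored dimension and filter on any other, so only the dimensions themselves have to be decided up front, not every combination of their values.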