All Topics



Hello. Currently we are trying to get a new SQL query to function within DB Connect. The SQL query runs against a Postgres database. Although the query runs fine in batch mode, it is not running well when set up in rising mode. Attached is a screenshot of the error message we are receiving. We tried setting the rising column to "created_at". We also tried adding the following lines to the SQL query:

AND created_at > ?
ORDER BY created_at ASC

But we receive these error messages:

org.postgresql.util.PSQLException: No value specified for parameter 1.
org.postgresql.util.PSQLException: The column index is out of range: 1, number of columns: 0.

Can someone suggest the best approach to resolving this?

Regards,
Max
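For reference, the ? in a rising-column query is a JDBC bind parameter that DB Connect fills with the last checkpoint value at execution time; executing the same statement with nothing bound fails exactly like the first PSQLException above. A minimal Python/sqlite3 sketch of that mechanism (table, column, and values are invented stand-ins for the real schema):

```python
import sqlite3

# Sketch of rising-column behavior: the "?" is a bind parameter that the
# caller (here, us; in Splunk, DB Connect) fills with the last checkpoint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, created_at TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, "2020-01-01"), (2, "2020-02-01"), (3, "2020-03-01")])

sql = "SELECT * FROM events WHERE created_at > ? ORDER BY created_at ASC"

# Executing with no bound value fails, analogous to
# "No value specified for parameter 1" when no checkpoint is bound.
try:
    conn.execute(sql).fetchall()
except sqlite3.ProgrammingError as e:
    print("unbound:", e)

# With a checkpoint bound, only rows newer than the checkpoint come back.
rows = conn.execute(sql, ("2020-01-15",)).fetchall()
print(rows)  # rows with created_at after the checkpoint
```

This suggests checking that the input is actually saved and executed as a rising input (where the checkpoint gets bound) rather than previewed as a batch query with the ? left in place.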
I have the log message below. Basically it is for creating a customer record, and if we get an error we retry up to 5 times until it succeeds.

log message:
Error while creating record for customer id : 94ABGH0048
Error while creating record for customer id : 94ABGH0048
Successfully created record for customer id : 94ABGH0048
Error while creating record for customer id : 902SDKK720
Successfully created record for customer id : 945TTFK048

Can I get the customer ids for which a record was never created successfully? According to the log message above, the output should be "902SDKK720".
I'm trying to do a field extraction for a hostname field that has some inconsistency in its format. There are two formats for the hostname field, and values can be upper or lower case; I need them in lower case:

DOMAIN\hostname
hostname.xxxx.xx.xxx

Previously, I was replacing what I didn't want in that field and lowercasing it with an eval in order to join to a lookup table. What I'm trying to do now is a field extraction that checks the hostname field for both formats and removes the leading DOMAIN\ or the trailing .xxxx.xx.xxx FQDN suffix. These are the rex commands I'm using:

| rex field=hostname "DOMAIN\\\(?P<ComputerName>.*)"
| rex field=hostname "^(?<ComputerName>[^\.]+)"

Any help would be appreciated!
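One possible approach is a single regex that treats the leading DOMAIN\ as optional and stops at the first dot or backslash, then lowercases the capture. A Python sketch of that pattern (the sample hostnames are invented):

```python
import re

# One regex for both formats: strip an optional leading "DOMAIN\" prefix
# and stop before any trailing ".fqdn" suffix, then lowercase the result.
pattern = re.compile(r"^(?:DOMAIN\\)?(?P<ComputerName>[^.\\]+)", re.IGNORECASE)

for host in [r"DOMAIN\HOST01", "host02.corp.example.com", "HOST03"]:
    m = pattern.match(host)
    print(m.group("ComputerName").lower())
# host01, host02, host03
```

The same pattern should carry over to a single rex (with the backslashes escaped for SPL string quoting) followed by an eval lower().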
I'm planning to prepare a stacked bar chart. My current table looks like this:

ID | Request | Response
---|---------|---------
1  | Req1    | Success1
2  | Req1    | Success32
3  | Req1    |
4  | Req2    | Success41
5  | Req2    | Success21
6  | Req2    |
7  | Req2    | Success22

I need to stack the total requests (Req1: 3, Req2: 4, ...) against the successful responses (Req1: 2, i.e. Success1 and Success32; Req2: 3, i.e. Success41, Success21, and Success22) in table form:

Req  | Total | Success
-----|-------|--------
Req1 | 3     | 2
Req2 | 4     | 3
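The computation is just "count all rows vs rows with a non-empty Response, per Request". A Python sketch of that aggregation over the sample rows above (in SPL, a stats with one count of all events and one count of non-null Response values per Request should express the same thing, but verify against your data):

```python
from collections import defaultdict

# Count total requests vs non-empty responses per request type.
rows = [
    (1, "Req1", "Success1"), (2, "Req1", "Success32"), (3, "Req1", ""),
    (4, "Req2", "Success41"), (5, "Req2", "Success21"), (6, "Req2", ""),
    (7, "Req2", "Success22"),
]

totals, successes = defaultdict(int), defaultdict(int)
for _id, req, resp in rows:
    totals[req] += 1
    if resp:                      # empty response -> not a success
        successes[req] += 1

for req in sorted(totals):
    print(req, totals[req], successes[req])
# Req1 3 2
# Req2 4 3
```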
Hi, I am trying to get Snort data into Splunk by monitoring the barnyard2.alert file with a Universal Forwarder:

[monitor:///var/log/barnyard2/barnyard2.alert]
sourcetype = snort_unified2
index = snort

As explained here https://community.splunk.com/t5/Getting-Data-In/Why-is-Splunk-line-breaking-a-single-IDS-Alert-event-into-two/m-p/287641#M54947 , I added the same settings to props.conf (on the indexer and search head), but Splunk still breaks each line into a separate event, as shown below:

props.conf

[snort_unified2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[\*\*\]
TIME_PREFIX = ^([^\r\n]+[\r\n]+){2}
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %m/%d-%H:%M:%S.%6N
category = Network & Security

However, when I try these settings by manually adding the same input, it works just fine. Any ideas on what could be going wrong with props.conf?

Thanks,
~Abhi
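As a sanity check, the LINE_BREAKER regex itself does segment snort-style alerts correctly; when props only take effect on manual upload, the usual suspects are the sourcetype name not matching the forwarder's exactly, or the props not living on the first full Splunk instance that parses the data. A Python sketch replaying the regex against invented snort-style text:

```python
import re

# How Splunk's LINE_BREAKER regex would segment a snort alert stream.
# The capturing group is the discarded break text; "[**]" starts each event.
LINE_BREAKER = r"([\r\n]+)\[\*\*\]"

raw = (
    "[**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**]\n"
    "[Classification: Potentially Bad Traffic] [Priority: 2]\n"
    "09/27-10:15:01.123456 10.0.0.1 -> 10.0.0.2\n"
    "[**] [1:2019401:2] ET SCAN Suspicious inbound [**]\n"
    "[Classification: Attempted Recon] [Priority: 3]\n"
    "09/27-10:15:02.654321 10.0.0.3 -> 10.0.0.2\n"
)

parts = re.split(LINE_BREAKER, raw)
# re.split keeps the captured break text; every second slot after the first
# body is an event body with its leading "[**]" consumed, so restore it.
events = [parts[0]] + ["[**]" + p for p in parts[2::2]]
print(len(events))  # 2 multi-line events, not one event per physical line
```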
I have logs coming from AWS. First, I need to get just the message (which is an event) from the log. Second, some logs have multiple messages inside logEvents. How can I show just logEvents{}.message and segregate the messages from the logs? A sample log is:

{
  logEvents: [
    {
      id: 123456789.....
      message: {"Actual Log Event"}
      timestamp: 1601177009988
    }
    {
    }
  ]
  logGroup: CloudTrail
  logStream: 1234567890_CloudTrail_us-east-1
  messageType: DATA_MESSAGE
  owner: 1234567890
  subscriptionFilters: [
  ]
}
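Structurally this is "iterate logEvents and keep each element's message". A Python sketch of that extraction against an invented record of the same shape (in SPL, spath on logEvents{}.message followed by mvexpand is the usual equivalent, but verify against your data):

```python
import json

# Pull every logEvents[].message out of a CloudWatch-style record.
raw = json.dumps({
    "logEvents": [
        {"id": "123456789", "message": "Actual Log Event 1",
         "timestamp": 1601177009988},
        {"id": "123456790", "message": "Actual Log Event 2",
         "timestamp": 1601177009999},
    ],
    "logGroup": "CloudTrail",
    "logStream": "1234567890_CloudTrail_us-east-1",
    "messageType": "DATA_MESSAGE",
})

record = json.loads(raw)
messages = [e["message"] for e in record.get("logEvents", [])]
print(messages)  # one entry per inner message, other top-level keys dropped
```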
What were the new Splunk platform announcements made at .conf20?
Hi Team, please note: I have no admin privilege to run queries on the _internal index. I want to calculate the amount of data ingested into Splunk to evaluate licensing/disk space needs. I have two queries which I ran over 24 hours and over 30 days, and in both cases they produced very different outputs.

Query 1:
index=* | eval size=len(_raw) | eval gbsize=(size/1024/1024/1024) | stats sum(gbsize) by index

Query 2:
| dbinspect index=* | eval sizeOnDiskGB=(sizeOnDiskMB/1024) | stats sum(rawSize) AS rawTotal, sum(sizeOnDiskGB) AS diskTotalinGB by index | sort -diskTotalinGB

I see a difference of around 3-4 GB for some indexes, and presume one or even both of these queries might not be correct. Can anyone kindly suggest which is the correct one to use? I can't use _internal, as mentioned above.

Thanks in advance!
Regards,
Abhishek Singh
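A likely part of the explanation: the two queries measure different quantities. Summing len(_raw) approximates license-relevant raw bytes, while dbinspect reports size on disk, where the raw journal is compressed and index files add overhead, so the totals should not be expected to agree. A Python sketch illustrating why compressed-on-disk size diverges from summed raw length (the events are invented):

```python
import gzip

# Why len(_raw) totals and dbinspect disk totals diverge: raw event bytes
# compress well in the journal, while index files add separate overhead,
# so size-on-disk is simply a different quantity than summed raw length.
events = [f"2020-09-27 10:15:{i % 60:02d} host=web{i % 3} status=200 bytes=512\n"
          for i in range(1000)]

raw_bytes = sum(len(e) for e in events)
compressed = len(gzip.compress("".join(events).encode()))

print(raw_bytes, compressed)  # compressed journal is much smaller than raw
```

For license sizing, the len(_raw) style query is the closer proxy of the two; for disk capacity planning, dbinspect is the one to use.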
I have an email alert that is set to go out every morning. I have a bunch of long field names that get cut off randomly in the inline table in the body of the email. For example, I have a field named "Provisioning Status"; in the email alerts it is displayed as "Provision ing Status". I'd like the field to be displayed as "Provisioning Status". What can I do?
Hello,

1) Currently we have a search head on-prem that the indexer clusters are connected to.
2) Now we would like to spin up a new Splunk SH instance (on Ubuntu) in Azure and install the "TrackMe" app on this new Azure SH, syncing with the on-prem SH.

Is this a use case anyone has already tried, or is it possible to do?

Reason: we'd like to have our own SH in Azure exclusively for our own team.
Hello, I have a field name, let's call it "foo", and a value I want to add to my search, "bar". When I execute a normal query, for example:

index="main" sourcetype="blabla" foo="bar"

it doesn't find anything, although I know there are many events that have the field foo=bar. Alternatively, when I execute the following query:

index="main" sourcetype="blabla" foo="*bar"

I get the results I want. What causes the first search, which should work, to fail? Is it an encoding issue?

Thanks!
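One common cause, offered only as a guess: the extracted value carries an invisible prefix (a space, non-breaking space, or similar), which an exact match misses but a leading wildcard absorbs. A Python sketch of that effect:

```python
# A leading invisible character makes foo="bar" miss while foo="*bar"
# still matches, since the wildcard absorbs the stray prefix.
values = ["bar", "\u00a0bar", " bar"]   # plain, non-breaking space, space

print([v == "bar" for v in values])          # exact match: only the first
print([v.endswith("bar") for v in values])   # "*bar"-style match: all three
```

Inspecting the raw value length (e.g. eval l=len(foo)) is one way to confirm whether hidden characters are present.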
Hello, I'm trying to develop my first Phantom app using the wizard. The integration is like a ticketing system, and I want to implement an ingest action (on_poll). When I select this action and try to submit the app, I get the following error: What am I missing? Thank you very much. Jose
Hi. I'm quite a newbie in Splunk, but I'm trying to find a solution to my problem.

index=zt2 (first_search) OR (second_search)
| dedup USER_ID
| eval xxx=if(searchmatch("first_search"), 1, 0)
| eval yyy=if(searchmatch("second_search"), 1, 0)
| stats count(eval(xxx=1)) as "XXX", count(eval(yyy=1)) as "YYY"

I would like to count unique USER_IDs per eval expression, but the problem is that sometimes the same USER_ID appears in both searches (first_search and second_search), and dedup doesn't work correctly. How do I make dedup affect xxx and yyy separately in the evals?
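One way to think about it: compute the distinct-user count per condition independently, instead of deduplicating the combined event stream first (in SPL, something along the lines of a dc() per condition without the dedup). A Python sketch of the independent counting (the user ids are invented):

```python
# Distinct user count per condition, computed independently, so a user
# appearing in both searches is counted once in each, not dropped from one.
first_hits = ["u1", "u2", "u3", "u2"]    # USER_IDs matching first_search
second_hits = ["u2", "u4"]               # USER_IDs matching second_search

xxx = len(set(first_hits))   # distinct users in first_search
yyy = len(set(second_hits))  # distinct users in second_search
print(xxx, yyy)
```

The key point is that a global dedup throws away one of the two appearances of a shared USER_ID before the per-condition counts are taken, which is why the counts come out wrong.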
Hi, I wanted to know if it is possible to set up just one credential (i.e. one client id and key from one main tenant) and use it to pull Azure audit data as well as subscription resources from other organizations' tenants. I am looking into multi-tenant application registration to accomplish this. I am also using the following Azure-related add-ons and want to use one single secret key for all of them:

- Splunk Add-on for Microsoft Cloud Services
- Microsoft Azure Add-on for Splunk
- Microsoft Graph Security Add-on for Splunk

I would appreciate any insight; please point me in the right direction.
Hi Team,

I have a few questions regarding the transaction command. I have a series of events; two of them are shown below.

1st event: RAUPPT_PT280916DC0101...sm_mr=PT280916DC0101
2nd event: LLAPTU_PT280916DC0101

Questions:
1. I want to use the transaction command based on the PT280916DC0101 pattern. Can someone please provide a regex to extract this? The PT* token is present in every event.
2. As PT280916DC0101 appears multiple times in the 1st event, will that create any problems?
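Assuming the token is always "PT" plus six digits, two letters, and four digits, a regex like PT\d{6}[A-Z]{2}\d{4} would extract it, and a first-match extraction is unaffected by the token repeating later in the event. A Python sketch against the two sample events (adjust the widths if real ids vary):

```python
import re

# Extract the shared "PT..." token from both event shapes.
# Assumed format: "PT" + 6 digits + 2 letters + 4 digits.
pattern = re.compile(r"(?P<txn_id>PT\d{6}[A-Z]{2}\d{4})")

events = [
    "RAUPPT_PT280916DC0101...sm_mr=PT280916DC0101",
    "LLAPTU_PT280916DC0101",
]
for e in events:
    print(pattern.search(e).group("txn_id"))
# PT280916DC0101 both times: only the first occurrence is captured,
# so the repeat inside the 1st event is harmless.
```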
hi
I use the search below. As you can see, I stat the events by SITE:

`CPU`
| fields process_cpu_used_percent host
| eval slottime = strftime(_time, "%H%M")
| where (slottime >= 900 AND slottime <= 1700)
| lookup fo_all HOSTNAME as host output SITE
| search SITE=$tok_filtersite|s$
| eval cpu_range=case(process_cpu_used_percent>0 AND process_cpu_used_percent<=20,"0-20", process_cpu_used_percent>20 AND process_cpu_used_percent<=40,"20-40", process_cpu_used_percent>40 AND process_cpu_used_percent<=60,"40-60", process_cpu_used_percent>60 AND process_cpu_used_percent<=80,"60-80", process_cpu_used_percent>80 AND process_cpu_used_percent<=100,"80-100")
| stats avg(process_cpu_used_percent) as process_cpu_used_percent by host, _time, cpu_range, SITE

Now I need to do a timechart, so I add this line:

| timechart span=1d dc(host) by cpu_range

But I need to update my timechart by SITE, because I use a dropdown list with different SITE names. As there is no SITE field in the timechart line, I lose this field, so I am unable to display the timechart by SITE. I have tried this, but it doesn't work:

| timechart span=1d dc(host) by cpu_range SITE

What do I have to do to be able to filter the timechart by SITE?
Thanks
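One common workaround, sketched here rather than guaranteed: timechart only accepts a single split-by field, so concatenate SITE and cpu_range into one series field first (e.g. eval series=SITE.":".cpu_range, then timechart dc(host) by series), which keeps SITE in the series names and therefore filterable. The grouping idea in Python (the sample rows are invented):

```python
from collections import defaultdict

# Combine SITE and cpu_range into a single series key per day, so both
# survive a single split-by when counting distinct hosts.
rows = [  # (day, host, site, cpu_range) -- hypothetical sample
    ("2020-09-27", "h1", "PARIS", "0-20"),
    ("2020-09-27", "h2", "PARIS", "0-20"),
    ("2020-09-27", "h3", "LYON", "20-40"),
]

series = defaultdict(set)
for day, host, site, cpu_range in rows:
    series[(day, f"{site}:{cpu_range}")].add(host)

for key, hosts in sorted(series.items()):
    print(key, len(hosts))  # distinct host count per (day, SITE:cpu_range)
```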
Hello everybody, has anyone instrumented a Swagger API using AppDynamics? And if yes, which .NET version are you running? Thank you very much!
Is Universal Forwarder version 7.1.X compatible with Indexer version 8.0.X?
Hi, can we assign two different transforms to the same sourcetype? For example:

props.conf

[sourcetype]
SEDCMD-test =
SEDCMD-test1 =
TRANSFORMS-routing = transformA, transformB

transforms.conf

[transformA]
REGEX = Error
FORMAT = target1

[transformB]
REGEX = Something
FORMAT = target1

So can I apply two different SEDCMD settings with the same sourcetype config in props.conf, given that the two SEDCMDs do different things?
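For what it's worth, multiple SEDCMD-&lt;class&gt; settings can coexist under one sourcetype stanza, and each class is applied independently. The effect is like running two independent substitutions over the same event, as this Python sketch illustrates (the event text and replacements are invented):

```python
import re

# Two independent sed-style replacements applied to the same event,
# analogous to two SEDCMD-<class> lines under one sourcetype stanza.
event = "Error code=42 Something happened"
event = re.sub(r"Error", "ERR", event)        # like SEDCMD-test
event = re.sub(r"Something", "EVT", event)    # like SEDCMD-test1
print(event)  # "ERR code=42 EVT happened"
```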
Hello everyone, I have a KV store down on a 4-member search head cluster. I'd like to resync it without impacting user activity too much. I have two options:

- Put the instance in detention, stop it, clean the store, start it, and take it out of detention. But I'm not sure how new requests will be handled by the load balancer in front of the cluster.
- Log in to the KV store captain and resync all. Does that action launch a rolling restart? Does it affect performance? I couldn't find info on that.

I mostly used: https://docs.splunk.com/Documentation/Splunk/latest/Admin/ResyncKVstore

Thanks in advance,
Ema