I am new to Splunk queries. I need to capture the field value of tn, "Subscription_S04_LookupInvoiceStatus", and the response data (highlighted in bold in the XML below) for the corresponding "tn" field value, and display them under statistics. "Subscription_S04_LookupInvoiceStatus" appears multiple times in the XML file, along with the response data for the corresponding "tn" field value, and I want to query for unique values only (remove duplicates). I tried the query below, but it is not pulling the response data. Kindly help me; it would be a great help.

Query I tried:

    index=perf-*** host=****** source=/home/JenkinsSlave/JenkinsSlaveDir/workspace/*/project/logs/*SamplerErrors.xml
    | eval tn=replace(tn,"\d{1}\d+","")
    | rex d"<responseData class=\"java\.lang\.String\">?{(?P<Response_Data1>[\w\D]+)<\/java.net.URL>"
    | dedup tn
    | stats count by tn,Response_Data1
    | rex field=Response_Data1 max_match=2 "<responseData class=\"java\.lang\.String\">?{(?P<Response_Data2>[\w\D]+)<\/java.net.URL>"
    | eval Response_Data2=if(mvcount(Response_Data2)=2, mvindex(Response_Data2, 2), Response_Data2)

XML Data:
--------------------
    </sample>
    <sample t="48" lt="0" ts="1662725857475" s="true" lb="HealthCheck_Subscription_S04_LookupInvoiceStatus_T01_LookupInvoiceStatus" rc="200" rm="Number of samples in transaction : 1, number of failing samples : 0" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="" by="465" ng="1" na="1">
    <httpSample t="48" lt="48" ts="1662725858479" s="true" lb="EDI2" rc="200" rm="OK" tn="Subscription_S04_LookupInvoiceStatus 1-1" dt="text" by="465" ng="1" na="1">
    <responseHeader class="java.lang.String">HTTP/1.1 200 OK Date: Fri, 09 Sep 2022 12:17:38 GMT Content-Type: application/json; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Content-Encoding: gzip </responseHeader>
    <requestHeader class="java.lang.String">Connection: keep-alive content-type: application/json Authorization: Bearer test_***** Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 perftest: true Content-Length: 40 Host: stage-subscription.teslamotors.com X-LocalAddress: /10.33.51.205 </requestHeader>
    <responseData class="java.lang.String">{"orderRefId":"****","productName":"***","country":"NL","invoiceInformation":[{"uniqueOrderId":"****","amount":**,"currency":null,"invoiceStatus":"**","dueDate":null,"cycleStartDate":"**","cycleEndDate":"*****","paymentDate":"****"}]}</responseData>
    <responseFile class="java.lang.String"/>
    <cookies class="java.lang.String"/>
    <method class="java.lang.String">POST</method>
    <queryString class="java.lang.String">{ "OrderRefId": "*****"}</queryString>
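A minimal sketch of one way to pull both values at search time; the index/host/source filters come from the question, while the rex patterns and the trailing thread-suffix cleanup (the " 1-1" part of tn) are illustrative assumptions about the event layout:

    index=perf-*** host=****** source=*SamplerErrors.xml
    | rex "tn=\"(?<tn>[^\"]+)\""
    | rex "<responseData class=\"java\.lang\.String\">(?<Response_Data>[^<]*)</responseData>"
    | eval tn=replace(tn, "\s+\d+-\d+$", "")
    | stats count by tn, Response_Data

The first rex grabs the tn attribute, the second captures everything inside the <responseData> element, the eval strips the trailing thread counter so "Subscription_S04_LookupInvoiceStatus 1-1" collapses to "Subscription_S04_LookupInvoiceStatus", and stats by itself removes duplicates, leaving one row per unique tn/response pair.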
We have changed how we do things and intend to move to smartcache shortly. We have a lot of frozen data we would like to put back into circulation, in anticipation of making it readily available to be retrieved when required. I understand we can utilise a frozen folder; however, we would like to pull it back into our cache before the move to smartcache, allowing Splunk to manage it via the smartcache storage. Is there a way or method by which this can be achieved?
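For reference, the standard way to bring frozen buckets back is to thaw them: copy each frozen bucket into the index's thaweddb directory and rebuild it so it becomes searchable again. A minimal sketch, where the archive path, index name, and bucket name are all assumptions:

    # 1. copy a frozen bucket back into the index's thaweddb directory
    cp -r /frozen_archive/myindex/db_1662725857_1662639457_42 $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/
    # 2. rebuild the bucket so its index files and metadata are recreated
    $SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1662725857_1662639457_42

Thawed buckets sit outside the index's normal retention and size limits, so they stay until removed manually; how thawed buckets are treated once the index is migrated to SmartStore is worth confirming against the SmartStore documentation before relying on this.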
From an IP, logs are sent to a syslog server via TCP/UDP 1503, and a Universal Forwarder is installed on that server. I need to send the logs from the syslog server to the Splunk server under index="ibmguardium". Can someone assist, please?
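A minimal sketch of the forwarder-side configuration, assuming the syslog daemon writes the Guardium feed to files under /var/log/guardium/ and the indexer listens on the default receiving port 9997 (the file path, sourcetype, and indexer address are assumptions):

    # inputs.conf on the Universal Forwarder: monitor the files the syslog daemon writes
    [monitor:///var/log/guardium/*.log]
    index = ibmguardium
    sourcetype = ibm:guardium:syslog
    disabled = 0

    # outputs.conf on the Universal Forwarder: forward to the Splunk indexer
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = splunk-indexer.example.com:9997

The ibmguardium index must already exist on the indexer, and the indexer needs a receiving port enabled, otherwise the forwarded events are dropped or queued.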
Hello, how would I assign one source type to two different indexes, one after another? As an example: I assigned sourcetype=win:syslog to index=winsyslog_test on January 20, 2022. Now I need to assign sourcetype=win:syslog to index=win_syslog. I have two issues: 1. How would I assign sourcetype=win:syslog to index=winsyslog_test and index=win_syslog under this condition? 2. If I assign sourcetype=win:syslog to index=win_syslog, will all of the events with sourcetype=win:syslog that have been in index=winsyslog_test since January 20, 2022 also show up under index=win_syslog? Any help will be highly appreciated. Thank you!
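A minimal sketch of the usual approach, assuming the data arrives through an inputs.conf stanza you control: change the index setting so new events land in the new index. Events that were already indexed stay in winsyslog_test and are not copied or moved, so they will not appear under win_syslog; search both indexes together when you need the full history. The monitor path is an assumption:

    # inputs.conf on the forwarder that collects this feed: new events go to the new index
    [monitor:///var/log/windows_syslog/*.log]
    sourcetype = win:syslog
    index = win_syslog

    # searching old and new data together
    (index=winsyslog_test OR index=win_syslog) sourcetype=win:syslog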
I have my Splunk integrated with the ServiceNow add-on for incident creation. When the alert is set to real-time, I receive "unknown sid" in the alert history and no tickets are generated, but when it is set to a 1-minute scheduled window it works fine with no issues. Could someone help me understand what the issue is here, please?
Hello, I'm a newbie in Splunk and I'd like to draw a pie chart where the total value is taken from a CSV sheet. E.g. X = 2 and Y = 10, and I'd like the pie chart total to take the value of Y, with X shown as part of it with its percentage. So the total pie chart value is 100%, where the 100% represents the value of Y and X represents 20% of it. The best query I have reached is (index="A" source="*B*" | chart values(X) over Y | transpose); however, the chart represents the percentages of X and Y as if the total value of the pie chart were (X+Y), which is not what I want.
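A minimal sketch of one way to get that proportion: compute the remainder Y - X as its own slice, so the two slices sum to Y. The stats functions and the assumption that X and Y appear once per result are illustrative:

    index="A" source="*B*"
    | stats latest(X) as X latest(Y) as Y
    | eval Remainder = Y - X
    | fields X Remainder
    | transpose
    | rename column AS category, "row 1" AS value

Charting value split by category as a pie then shows X as 20% and Remainder as 80%, so the whole pie represents the value of Y.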
I have this search:

    SELECT C_ID,
           REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML,
           C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ?
    ORDER BY C_CREATED_DATE_TIME ASC

and I want to add an AND clause to the WHERE section. For some reason it doesn't work. I tried:

    SELECT C_ID,
           REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML,
           C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ? and C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121)
    ORDER BY C_CREATED_DATE_TIME ASC

and I tried:

    SELECT C_ID,
           REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') as C_XML,
           C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > convert(varchar,'2022-08-31',121) AND C_CREATED_DATE_TIME > ?
    ORDER BY C_CREATED_DATE_TIME ASC

Neither of them worked. Any advice?
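One thing worth checking, offered only as a guess rather than a confirmed fix: convert(varchar, '2022-08-31', 121) produces a string, so the added predicate compares the datetime column against a varchar, which may not behave the way you expect. A sketch of the same query with the literal cast to a datetime instead:

    SELECT C_ID,
           REPLACE(REPLACE(REPLACE(REPLACE(C_XML,'"',''''), CHAR(13), ''), CHAR(10), ''),' ','') AS C_XML,
           C_TENANT_ID, C_MESSAGE_PRIORITY, C_CREATED_DATE_TIME, C_WAS_PROCESSED, C_LOGICAL_ID
    FROM "ElbitErrorHandlingDB"."dbo"."COR_INBOX_ENTRY"
    WHERE C_CREATED_DATE_TIME > ?                                   -- DB Connect rising-column checkpoint
      AND C_CREATED_DATE_TIME > CONVERT(datetime, '2022-08-31', 121) -- literal cast to datetime, not varchar
    ORDER BY C_CREATED_DATE_TIME ASC

If the input still fails with this, the DB Connect dbxquery/input logs usually show the exact JDBC error for the rewritten statement, which narrows down whether the problem is the SQL or the rising-column placeholder handling.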
Hi, just curious if this is possible, as I have an interesting challenge. I have extracted fields as key=value pairs: id0=0000, id1=1111, id2=2222, ..., idN=NNNN, zone0=zone0, zone1=zone1, zone2=zone2, ..., zoneN=zoneN. Now I want to create a new field like this, where the number just auto-increments:

    | eval example0 = id0 + " location:" + zone0

My challenge is how to make this more "automatic", as I don't know the number N in the event, and I want the same eval applied for every exampleN. It will end up a little more complicated, as I'll add a case statement in the eval, but the initial challenge is how to automate it in the simpler string scenario.
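A minimal sketch using foreach, which runs the same eval template once per field matching the wildcard; <<MATCHSTR>> is the part matched by the *, so id0 pairs with zone0, id1 with zone1, and so on (the field names come from the question, the rest is illustrative):

    | foreach id*
        [ eval example<<MATCHSTR>> = '<<FIELD>>' . " location:" . 'zone<<MATCHSTR>>' ]

The case() logic can later replace the simple concatenation inside the same template, since the whole eval expression is repeated for every matching field.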
Is there a way to track when an index stopped bringing in data? I just noticed that one of our indexes is no longer bringing data into Splunk. Is there a command that can show me the last known event time? I have been able to track it manually back to the day it stopped receiving data.
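A minimal sketch using tstats, which reads this from the index metadata instead of scanning events; the index name is a placeholder:

    | tstats latest(_time) AS last_event WHERE index=your_index
    | eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")

Adding "BY sourcetype" (or host) to the tstats call shows which individual feed within the index went quiet, which is usually the next question.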
We have the outliers SPL and the visualizations working, but I don't know how to create the alerts themselves. How do we go about it? We can use sendemail, but that won't be captured within _audit, which is a shame.
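One common pattern, sketched as an assumption about what the outlier search returns (isOutlier is a placeholder for however your search flags anomalies): end the search with a filter so it only returns rows when something is anomalous, save it as a scheduled alert, and set the trigger condition to "number of results greater than 0".

    ... existing outlier search ...
    | where isOutlier=1

Because the alert then fires through the scheduler, each run and its alert actions are recorded in index=_internal (scheduler logs) and on the Triggered Alerts page, which gives an auditable trail without depending on sendemail alone.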
After upgrading DB Connect from version 3.8 to 3.10, it won't accept any connection that was previously set up. Everything worked fine before the upgrade, but now my outputs and inputs can't load. When I try choosing a connection table, it displays the error "invalid database connection". I also noticed the new DBX version has a keystore tab on the settings menu (this is new and was not in the previous version 3.8). I have the necessary drivers installed: Splunk_JDBC_mssql version 1.1 and JRE version 11.0. Can someone assist me with what I'm missing for my connections to work?
Hi, I have a log that dynamically adds "fields" to the log record based on some logic. It's a syslog beginning plus a payload that looks like this (example):

    Sep 10 16:52:07 11.11.11.11 Sep 10 16:52:07 process[111]: app=test&key0=value0&key1=value1&key2=key...&keyN=valueN

How can I automatically/dynamically extract every keyN into a field?
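A minimal sketch using the extract (kv) command at search time, telling it that pairs are separated by & and that keys are separated from values by =; the field names shown come from the example event:

    ... your base search ...
    | extract pairdelim="&" kvdelim="="

This pulls app, key0, key1, ..., keyN out of _raw regardless of how many pairs a given event carries. If it should happen automatically for the sourcetype, the same idea can be moved into a search-time extraction in props.conf, but the command above is the quickest way to confirm the delimiters behave as expected.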
While opening the search head, I get the following error: "View more information about your request (request ID = 631c96cc4c7fa17c4faf10) in Search. This page was linked to from https://inblrshsplnk07.siemens.net/. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage."
When configured for permissive mode, UI requests hitting the Splunk UI without the REMOTE_USER header are directed to a go-away page saying "not authorized". This behavior is correct for strict mode, but not for permissive mode. This is unfortunate for any use case where you want SSO to enable certain kinds of automatic access but still enable users to log in the old-fashioned way. My use case is automated UI testing, which is obviously a minority, but this will affect all Splunk app developers.
I am not sure how to word this, so I'll explain it with an example. We have three firewalls that send logs for ingestion. Each firewall serves a separate purpose, so they are configured slightly differently, and each appliance's logs go into a separate index (due to their purposes and locations in the logical topology). Within each firewall's logs there are, of course, field values that are helpful to sort and run stats on. Now my question: I am still learning SPL (reading through Exploring Splunk by Carasso), so I don't yet have a full understanding of all the nuances. In one search string, can I reference each index and create a table per index which further divides that index into categories, with firewall action as one field and type of request as another, then provide counts for each of those categories (how many of field 1, field 2, etc.) and also display total bandwidth (bytes), all within the same table?

Index FW1             stat count ------  FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW2             stat count ------  FW Action ---- (nested sort) Type of Request ---- bytes total
Index FW3             stat count ------  FW Action ---- (nested sort) Type of Request ---- bytes total

Can I do all that in one search string, or do I have to create a search for each index?
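A minimal sketch of a single search over all three indexes, grouping by index so each firewall gets its own block of rows; the index names and the action/request/bytes field names are assumptions about how the data is onboarded:

    (index=fw1 OR index=fw2 OR index=fw3)
    | stats count sum(bytes) AS total_bytes BY index, action, request_type
    | sort index, action, request_type

Because index is just another split-by field here, one search covers all three firewalls, and the result reads as the nested breakdown sketched above: per index, per action, per request type, with the event count and total bytes on each row.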
As the question says: can a Universal Forwarder report an internal IP? It can clearly report the external IP, but that's not useful to me.
Introduction to Apache Camel

Apache Camel is a popular ETL framework with many components. Camel has a long and storied history and is just awesome all around. There is some cool theory behind how it all works: data flows from producers and goes to consumers. Both producers and consumers can be represented as endpoints. The data is manipulated by a set of constructs named enterprise integration patterns. You can buy the reference book about EIPs here, and here is a succinct doc on it on the Camel website. What you really need to take away from this introduction is that if your component is in the list, you can safely assume you can route all sorts of data to it.

When I joined Splunk two years ago, I played around one night and sent the Camel team a pull request to add Splunk HEC support. I have to say, my code contribution wasn't all that good. Thankfully, someone else came around after me and fixed it so it's just right now. We're going to use this component today.

When you configure Camel, you define routes by which data will transit. In our case, we want to send data to Splunk. To make this interesting, we can reprise the example from Jose where he used the power of Splunk to read and analyze GitHub stats. In our case, we would like to read GitHub releases.

Sending Splunk Data In

GitHub releases are available as an Atom feed. I happen to run a Geth node, an Ethereum client, for my day job, so I'd like to know when a new release is available. First, to refer to the Atom feed, I type the endpoint of the feed: https://github.com/ethereum/go-ethereum/releases.atom

Since this is going to use the Atom component, I need to point that out by adding a scheme:

    atom:https://github.com/ethereum/go-ethereum/releases.atom

The Atom component lets me split the entries so I don't read 10 releases at once. This is done by adding splitEntries=true to the query string of the endpoint:

    atom:https://github.com/ethereum/go-ethereum/releases.atom?splitEntries=true

Now, we want to send data to Splunk. The endpoint for Splunk is:

    splunk-hec:splunk-host/splunk-token?skipTlsVerify=false&index=myindex&source=myindex&sourcetype=mysourcetype&bodyOnly=true

A few notes:
- Notice the splunk-hec URI scheme.
- splunk-host is the host name of the instance.
- splunk-token is your HEC token.
- skipTlsVerify sets whether we should skip the verification of the TLS handshake.
- index, source, and sourcetype relate to how your data will be ingested. Those three are optional, since they can be defaulted in Splunk itself.
- bodyOnly is the fix that was contributed to the component; it removes the extra envelope of the Camel message so the exact message present in the body is sent.

Apache Camel is easy to configure from there. You just set a "from" and a "to". It looks like this:

    from("atom:https://github.com/ethereum/go-ethereum/releases.atom?splitEntries=true")
        .to("splunk-hec:splunk-host/splunk-token?skipTlsVerify=false&index=myindex&source=myindex&sourcetype=mysourcetype&bodyOnly=true");

This gets you to a working route.

Running an Apache Camel Program as a Native Program

Two years later I had another few hours to play with Apache Camel, and I tried to see how far I could take the Camel pipeline to make it as easy as possible to run a tool that ingests multiple Atom feeds. I took an unconventional approach: I decided to write my code in Kotlin, allow configuration as a TOML file, and, best of all, compile all this to run as a native program. The result is on GitHub.

Please take this for what it is: an experiment to test out a novel packaging method producing a single binary. The program itself is 75 megabytes once compiled down to a native executable. It takes a single argument, the path of a configuration file. For fun, I created a Docker Compose environment in the repository you can run to see it in action. It runs Splunk next to it and sends a few chosen Atom feeds of GitHub releases to it.

One More Thing

The issue with a simple route (from -> to) is that we don't preserve the state of what was consumed. If we stop and restart our route, we might send data to Splunk twice. Camel defines the concept of an idempotent consumer, and offers several implementations. Being a hopeless romantic (I just love how it scales), I have used their Infinispan flavor, tied to a local RocksDB database, as a cache here.

To set up Infinispan in Kotlin, I simply initiate the cache like so:

    val cacheConfig = ConfigurationBuilder().persistence()
        .addStore(RocksDBStoreConfigurationBuilder::class.java)
        .location(path.resolve("idempotent").toAbsolutePath().toString())
        .expiredLocation(path.resolve("idempotent-expired").toAbsolutePath().toString())
        .build()
    val infinispanConfig = InfinispanEmbeddedConfiguration()
    val cacheManager = DefaultCacheManager(GlobalConfigurationBuilder().defaultCacheName("api-scraper").build(), cacheConfig, true)
    infinispanConfig.cacheContainer = cacheManager
    val repo = InfinispanEmbeddedIdempotentRepository("atom")
    repo.configuration = infinispanConfig

Then, in between from and to, we add:

    .idempotentConsumer(simple("\${body.id.toASCIIString}"), repo)

simple("\${body.id.toASCIIString}") becomes the key of the entry in the cache. See it in action in a few lines of this Main class.

I hope you enjoyed reading this blog post. If you have any questions or comments about the code, feel free to star and open issues on the repository. If you are interested in learning more about Apache Camel, please visit their website and join their mailing lists to get acquainted with their community.

— Antoine Toulme, Senior Engineering Manager, Blockchain & DLT
I'm working with the "Jira Issue Input Add-on", and in Jira we have created custom fields. Splunk ingests issues, and the custom field data looks like this:

    customfield_10101: SA-1017
    customfield_10107: 3
    customfield_25402: [ [+] ]
    customfield_25426: [ [+] ]
    customfield_25427: { [+] }

There are 1,049 custom fields. I would like to use the names for the custom fields and have created a CSV file with this:

    customfield_custom_field_number,custom_field_name
    customfield_10000,Request participants
    ...
    customfield_27904,Target Date

I'm trying to avoid having all the renames in props.conf. Is there any way of taking the field name in an event and, using the lookup, renaming it to what is found in the lookup?
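A minimal sketch of one search-time approach, offered under assumptions about the data shape (one result row per issue, a single-valued issue key field to pivot on, and the CSV uploaded as a lookup table file with the assumed name jira_custom_fields.csv): flip the fields into name/value rows with untable, rename the names through the lookup, then flip back with xyseries.

    ... base search returning one row per issue ...
    | untable issue_key field_name field_value
    | lookup jira_custom_fields.csv customfield_custom_field_number AS field_name OUTPUT custom_field_name
    | eval field_name=coalesce(custom_field_name, field_name)
    | fields - custom_field_name
    | xyseries issue_key field_name field_value

The coalesce keeps any field that has no entry in the CSV under its original customfield_NNNNN name; multivalue custom fields (the [+] ones) may not survive the round trip cleanly, so this is best treated as a starting point rather than a drop-in solution.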
Can Splunk Enterprise 8.2.6 be upgraded to 9.1.0?
Hi, I have similar authentication logs, as below:

LOG 1: 03362 auth: ST1-CMDR: User 'my-global\admin' logged in from IP1 to WEB_UI session

LOG 2: %%10WEB/4/WEBOPT_LOGIN_SUC(l): admin logged in from IP2

The regex below works only for event LOG 2:

    (?<user>\w+)\slogged\sin\sfrom\s(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})

It probably doesn't match the special characters; any idea how to solve that? Thank you in advance!
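A sketch of a pattern that should cover both formats: \w+ cannot match the backslash in 'my-global\admin', so the user capture is widened to any run of characters that is neither whitespace nor a quote, with optional surrounding quotes. The capture group names follow the original regex:

    | rex "'?(?<user>[^'\s]+)'?\slogged\sin\sfrom\s(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"

Against LOG 1 this extracts user=my-global\admin (the quotes are consumed by the optional '? on either side), and against LOG 2 it still extracts user=admin, with src_ip captured the same way in both cases.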