
All Posts

index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message

OUTPUT:

RampdataSet | Initial message received with below details | Letter published correctley to ATM subject | Letter published correctley to DMM subject | Letter rejected due to: DOUBLE_KEY | Letter rejected due to: UNVALID_LOG | Letter rejected due to: UNVALID_DATA_APP
WAC | 10 | 0 | 0 | 10 | 0 | 10
WAX | 30 | 15 | 15 | 60 | 15 | 60
WAM | 22 | 20 | 20 | 62 | 20 | 62
STC | 33 | 12 | 12 | 57 | 12 | 57
STX | 66 | 30 | 0 | 96 | 0 | 96
OTP | 20 | 10 | 0 | 30 | 0 | 30
TTC | 0 | 5 | 0 | 5 | 0 | 5
TAN | 0 | 7 | 0 | 7 | 0 | 7

But we want output as shown below, where

Total = "Letter published correctley to ATM subject" + "Letter published correctley to DMM subject" + "Letter rejected due to: DOUBLE_KEY" + "Letter rejected due to: UNVALID_LOG" + "Letter rejected due to: UNVALID_DATA_APP"
| table RampdataSet "Initial message received with below details" Total

RampdataSet | Initial message received with below details | Total
WAC | 10 | 20
WAX | 30 | 165
WAM | 22 | 184
STC | 33 | 150
STX | 66 | 222
OTP | 20 | 70
TTC | 0 | 15
TAN | 0 | 21
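One way to get that Total, sketched against the chart output above (the long field names are referenced with single quotes in eval; this assumes chart produces zeros rather than nulls for missing combinations, which chart count does):

index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message
| eval Total='Letter published correctley to ATM subject' + 'Letter published correctley to DMM subject' + 'Letter rejected due to: DOUBLE_KEY' + 'Letter rejected due to: UNVALID_LOG' + 'Letter rejected due to: UNVALID_DATA_APP'
| table RampdataSet "Initial message received with below details" Total

addtotals fieldname=Total with an explicit list of the "Letter ..." columns would be another option.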
@bowesmana , thank you for your inputs. We created queries according to our data and they are working now. Thank you once again.
Hi @Ryan.Paredez: Support provided the same solution and it works. Thanks, Roberto
Hi,

I'm connecting to a Vertica database. The latest JDBC driver has been installed and connects to an older Vertica instance. I set up an Identity with the username and password, but when I tried to create a Connection, it failed with an authentication warning.

My workaround for now is to edit the JDBC URL manually via the interface and add the user and password parameters, e.g.

jdbc:vertica://my.host.name:5433/databasename?user=myusername&password=mypassword

The connection now works, which proves that the JDBC driver and credentials are fine. This isn't a proper solution though, as anyone with administration privileges in DB Connect can see the username and password by editing that connection.

Any ideas on how to make a Vertica JDBC connection use the Identity that was set up? The jdbcUrlFormat in the configuration is:

jdbc:vertica://<host>:<port>/<database>

I was wondering if one solution would be a way to reference the Identity here, e.g.

jdbc:vertica://my.host.name:5433/databasename?user=<IdentityUserName>&password=<IdentityPassword>

I have tried similar things and that doesn't work either.
Anyone have inputs about that?
Better late than never answering this, right? The part after the @ is a snap-to specifier that causes the search to start at the nearest value in that time unit. For example, if the current time is 3:16:20, "-15m@m" will search from 3:01:00, whereas "-15m" will search from 3:01:20.
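A quick illustration, using the _internal index purely as an example:

index=_internal earliest=-15m@m latest=now
| stats count

With earliest=-15m@m the search window starts on the minute boundary (3:01:00 in the example above); with earliest=-15m it would start at 3:01:20.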
Holy events, tscroggins! That search you provided blew my mind and my instance. I did a 24-hour search and I have something like 10,000 stats results. It is so overwhelming reading all of these that I don't even know where to begin. You and your search are the real MVP though. I did have to take out the host=*splunkdcloud* from the search because I got zero results, but after I did that, BOOM, all the results.
Hi, I am trying to execute a multiline Splunk search as below using the REST endpoint services/search/v2/jobs/export (https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport).

Search command:

| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false

When I execute the curl it returns success 200, but the file is not created. Is it possible to invoke a multiline search command using pipes with this or any other search API? The search is dynamic, so I can't create a saved search and execute it.
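For what it's worth, a sketch of how a piped, multi-line search can be passed to that endpoint with curl; the host, port, and credentials are placeholders, and the whole search (leading pipe included) goes into the search parameter URL-encoded:

curl -k -u admin:changeme https://localhost:8089/services/search/v2/jobs/export \
  -d output_mode=json \
  --data-urlencode search='| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false'

If the endpoint still returns 200 but no file appears, inspecting the streamed response body for error messages is a reasonable next step, since export reports problems inside the stream rather than failing the HTTP call.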
Thanks @PickleRick, your last reply showed me what I was looking for. Data now rolls off after it gets to cold. It is not searchable once it gets to cold.
Check also the _internal events from component=DatabaseDirectoryManager around that time (not all events have an idx= field). There might be different factors at play, such as the retention period. You could check your buckets with dbinspect and see the earliest/latest events in them. Anyway, 10 hot buckets is quite a lot.
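A possible dbinspect starting point for that check, using the index name from this thread (adjust as needed):

| dbinspect index=mack
| eval earliest=strftime(startEpoch, "%F %T"), latest=strftime(endEpoch, "%F %T")
| table bucketId, state, earliest, latest, eventCount
| sort state, earliest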
Yes.... It's showing maximum warm bucket exceeded. Firing async chiller
I would be very cautious about such third-party hosted extensions. Even in the case of Splunkbase-originating add-ons not written and supported by Splunk, I tend to dig into an app and peek around the code before installing (and boy, there are some "interesting" ones; luckily I haven't found anything malicious yet, but some badly written Python code - why not). And this one is not even hosted on Splunkbase, which means it didn't even pass appinspect.
Every query should specify an index name before the first pipe.

index=aaa source="/var/log/tes1.log" | stats count by index

Of course, there must be data in the specified index from the specified source for there to be results.
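If in doubt whether the source has been indexed anywhere at all, a quick tstats sketch like this (with the source path adjusted to whatever the forwarder actually reports) can confirm which index the data landed in:

| tstats count where index=* source="/var/log/tes1.log" by index, sourcetype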
1. OK. This is the _current_ configuration. It would be even better to see the output of

splunk btool indexes list mack

and

| rest /services/data/indexes/mack

But the question is what/how did you change.

2. Did you check the reason for the bucket rolling?

index=_internal component=BucketMover idx=mack
HEC on its own doesn't have filtering abilities. You can filter events after receiving them (on any input) using props and transforms, but that doesn't change what you're sending over your WAN link. Your question is fairly generic and we don't have a lot of details about your environment, so the answer is also only generic in nature. Anyway, ingesting events into Splunk using Logstash might prove to be complicated unless you properly prepare your data in Logstash to conform to the format normally ingested by standard Splunk methods (otherwise you'd need to put in some work to properly extract fields from such Logstash-formatted events). But Logstash should give you the ability to filter the data before sending it over HTTP to the HEC input.
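To illustrate the props/transforms part mentioned above, a minimal sketch of dropping unwanted events on the component that parses the HEC data; the sourcetype name and regex are placeholders for whatever your HEC token actually assigns:

# props.conf
[my_hec_sourcetype]
TRANSFORMS-dropnoise = drop_noise_events

# transforms.conf
[drop_noise_events]
REGEX = DEBUG|heartbeat
DEST_KEY = queue
FORMAT = nullQueue

Note that this only reduces what gets indexed, not what crosses the WAN; filtering in Logstash before it posts to HEC is what actually saves link bandwidth.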
For some reason it is not on Splunkbase. I could only find the .SPL files in a git repository at https://github.com/SplunkBAUG/CCA, as TA_genesys_cloud-1.0.*.spl.

EDIT: As PickleRick suggested, when you get third-party hosted applications like the one linked, you have none of the protections that would be offered by appinspect. It is highly recommended to check the contents for malicious code before installing it on your machine.
Do you get any events when you use this search? (You can also set the time range to be very large, in case the events from the log source are not in the past 24 hours. Also double-check that the source path is correct.)

index=* source="/var/ltest/test.log"
Some potential problems with your query are:

1. index=aaa(source="/var/log/testd.log") does not have a space between the index and source filters.

2. The match() functions in your eval env=case() part should have valid regexes as the second argument, as in match(<field>, <regex>).

Try this:

| eval env=case(match(host, ".*10qe.*"), "Test", match(host, ".*10qe.*"), "QA", match(host, ".*10qe.*"), "Prod")

ref: https://docs.splunk.com/Documentation/SCS/current/SearchReference/ConditionalFunctions
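Putting both fixes together, a rough sketch of what the corrected search could look like; the host patterns here are placeholders, since the real environment-naming convention isn't known:

index=aaa source="/var/log/testd.log"
| eval env=case(match(host, "qe"), "QA", match(host, "stg"), "Test", true(), "Prod")
| stats count by env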
What I can say is I have nowhere near your understanding of Splunk operations. I do appreciate your input. I am taking my limited understanding of our wholly-UF-to-indexer environment and applying what I know to solve the issue of reducing cloud-to-on-prem traffic over the WAN link from our new SaaS solution. I keep a very low daily transfer rate (and licensing rate) in our on-prem environment by blacklisting noise and whitelisting the key events we want to track. I have no rights on the source machines, and I cannot install a UF, or anything for that matter. Logstash is the only option provided, which I assume requires HEC to receive the logs. I have read that HEC supports white/black listing, which is where my question came from.
Unfortunately not, as this app is "Not Supported" (as seen on its Splunkbase page), so Splunk support can't help you with fixing the app. If you are using Splunk Cloud and would like assistance with managing apps on Splunk Cloud, then Splunk support can probably help with getting the app onto your cloud instance.