Hi @Satyapv, you can use eval to categorize your data:

<your_search>
| eval period=case(_time>now()-300,"Last 5min Vol", _time>now()-600,"Last 10min Vol", _time>now()-900,"Last 15min Vol")
| chart count OVER Transaction BY period

Ciao.
Giuseppe
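Note that with the case() approach, each event lands in exactly one period, so the columns are non-overlapping counts. If the three columns should instead be cumulative (the last 15 minutes including the last 10 and 5), a minimal stats-based sketch, assuming the search itself is restricted to the last 15 minutes and Transaction is an extracted field:

<your_search> earliest=-15m
| stats sum(eval(if(_time>now()-300,1,0))) as "Last 5min Vol" sum(eval(if(_time>now()-600,1,0))) as "Last 10min Vol" count as "Last 15min Vol" by Transaction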
I tried to copy-paste your Chinese text into Google Translate to understand what you want to accomplish, but I am not sure the translation is correct: "I want to use syslog-ng to input data from the universal forwarder to my search head. I'm going to use TCP, but I don't know what's wrong and I can't display my data in the search head."

Your syslog-ng seems to be receiving syslog data on port 514 and then delivering it to TCP ports 10001/10002 depending on the source IP, while doing some transformation. Are 10001 and 10002 where your search heads are, or are those ports opened by the UF?

Usually the easiest way to send syslog data to Splunk is by using HEC (HTTP Event Collector); with HEC you can assign host/source/sourcetype to a specific log message directly, with no need for separate ports.

Also, you are manually getting rid of the priority header (i.e. removing the <NNN> at the front), but that would be taken care of by the actual syslog parser in syslog-ng, which you disabled via flags(no-parse).
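For reference, a minimal sketch of a syslog-ng http() destination pointing at HEC, assuming an HEC token has already been created on the Splunk side; the host name, port, token and source name below are placeholders:

destination d_splunk_hec {
    http(
        url("https://your-splunk-host:8088/services/collector/event")
        method("POST")
        headers("Authorization: Splunk your-hec-token")
        # build the HEC JSON payload from syslog-ng macros
        body("{\"event\": \"${MESSAGE}\", \"host\": \"${HOST}\", \"sourcetype\": \"syslog\"}")
    );
};

log { source(s_syslog); destination(d_splunk_hec); };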
Hi Team, I need to extract the values of fields that have multiple values, so I used commands like mvzip, mvexpand, mvindex and eval. However, the output of my SPL query does not match the count of the interesting field. Could you please assist with this? Here is my SPL query, with output screenshots below.

index="xxx" sourcetype="xxx" source=xxx events{}.application="xxx" userExperienceScore=FRUSTRATED
| rename userActions{}.application as Application, userActions{}.name as Action, userActions{}.targetUrl as Target_URL, userActions{}.duration as Duration, userActions{}.type as User_Action_Type, userActions{}.apdexCategory as useractions_experience_score
| eval x=mvzip(mvzip(Application,Action),Target_URL), y=mvzip(mvzip(Duration,User_Action_Type),useractions_experience_score)
| mvexpand x
| mvexpand y
| dedup x
| eval x=split(x,","), y=split(y,",")
| eval Application=mvindex(x,0), Action=mvindex(x,1), Target_URL=mvindex(x,2), Duration=mvindex(y,0), User_Action_Type=mvindex(y,1), useractions_experience_score=mvindex(y,2)
| eval Duration_in_Mins=Duration/60000
| eval Duration_in_Mins=round(Duration_in_Mins,2)
| table _time, Application, Action, Target_URL, Duration_in_Mins, User_Action_Type, useractions_experience_score
| sort - _time
| search useractions_experience_score=FRUSTRATED
| search Application="*"
| search Action="*"

Query output with the statistics count:

Expected count:
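For reference, a common cause of a count mismatch with this pattern is that the two separate mvexpand commands produce a cross product of x and y, which the subsequent dedup then trims somewhat arbitrarily. A minimal sketch of an alternative, assuming the six userActions{} arrays are parallel and their values contain no embedded commas (field names taken from the query above), zips everything into a single multivalue field and expands it once:

| eval combined=mvzip(mvzip(mvzip(mvzip(mvzip(Application,Action),Target_URL),Duration),User_Action_Type),useractions_experience_score)
| mvexpand combined
| eval combined=split(combined,",")
| eval Application=mvindex(combined,0), Action=mvindex(combined,1), Target_URL=mvindex(combined,2), Duration=mvindex(combined,3), User_Action_Type=mvindex(combined,4), useractions_experience_score=mvindex(combined,5)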
Hello All,

I want to build a Splunk query using stats to get the count of messages for the last 5 min, last 10 min and last 15 min, something like below. Kindly let me know how this can be achieved.

Transaction    Last 5min Vol    Last 10min Vol    Last 15min Vol
A
B
C
Hi @ITWhisperer, could you please provide an update on this request? Thank you.
So we cannot load the index dynamically from log files, correct?
Hello Splunkers!! I want to run Splunk over HTTPS. I am using Windows Server. How do I generate a certificate and truststore in Splunk in a few easy steps? I followed the document below, but it is not giving any good results.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Howtoself-signcertificates
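For reference, a minimal sketch of one way to enable HTTPS for Splunk Web with a self-signed certificate; the host name and the mycerts directory are placeholders, and Splunk ships an openssl binary in its bin directory (you may need to point OPENSSL_CONF at Splunk's openssl.cnf on Windows):

cd "%SPLUNK_HOME%\bin"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=your-splunk-host" -keyout ..\etc\auth\mycerts\privkey.pem -out ..\etc\auth\mycerts\cert.pem

Then, in %SPLUNK_HOME%\etc\system\local\web.conf (paths are relative to $SPLUNK_HOME, and the mycerts directory must exist beforehand):

[settings]
enableSplunkWebSSL = true
privKeyPath = etc/auth/mycerts/privkey.pem
serverCert = etc/auth/mycerts/cert.pem

Finally, restart Splunk for the change to take effect.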
index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message

OUTPUT:

RampdataSet    Initial message received with below details    Letter published correctley to ATM subject    Letter published correctley to DMM subject    Letter rejected due to: DOUBLE_KEY    Letter rejected due to: UNVALID_LOG    Letter rejected due to: UNVALID_DATA_APP
WAC    10    0    0    10    0    10
WAX    30    15    15    60    15    60
WAM    22    20    20    62    20    62
STC    33    12    12    57    12    57
STX    66    30    0    96    0    96
OTP    20    10    0    30    0    30
TTC    0    5    0    5    0    5
TAN    0    7    0    7    0    7

But we want the output as shown below, where:

Total = "Letter published correctley to ATM subject" + "Letter published correctley to DMM subject" + "Letter rejected due to: DOUBLE_KEY" + "Letter rejected due to: UNVALID_LOG" + "Letter rejected due to: UNVALID_DATA_APP"

| table "Initial message received with below details" Total

RampdataSet    Initial message received with below details    Total
WAC    10    20
WAX    30    165
WAM    22    184
STC    33    150
STX    66    222
OTP    20    70
TTC    0    15
TAN    0    21
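For reference, a minimal sketch of one way to get that Total column, building on the chart above and assuming addtotals can be restricted to the "Letter ..." columns with a wildcarded field list (so the "Initial message..." column is excluded from the sum):

<base search and rex extractions as above>
| chart count over RampdataSet by Message
| addtotals fieldname=Total "Letter*"
| table RampdataSet "Initial message received with below details" Total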
@bowesmana, thank you for your inputs. We created queries according to our data and it is working now. Thank you once again.
Hi @Ryan.Paredez: Support provided the same solution and it works. Thanks, Roberto
Hi,

I'm connecting to a Vertica database. The latest JDBC driver has been installed and connects to an older Vertica instance. I set up an Identity with the username and password, but when I tried to create a Connection, it failed with an authentication warning.

My solution for now was to edit the JDBC URL manually via the interface and add the user and password parameters, as shown below:

jdbc:vertica://my.host.name:5433/databasename?user=myusername&password=mypassword

The connection now works and proves that the JDBC driver and credentials are working. This isn't a proper solution though, as anyone with administration privileges in DB Connect is able to see the username and password if they edit that connection.

Any ideas on how to make a Vertica JDBC connection utilize the Identity that has been set up? The jdbcUrlFormat in the configuration is:

jdbc:vertica://<host>:<port>/<database>

I was wondering if one solution is a way to reference the Identity here, e.g.:

jdbc:vertica://my.host.name:5433/databasename?user=<IdentityUserName>&password=<IdentityPassword>

I have tried similar things and that doesn't work either.
Anyone have inputs about that?
Better late than never answering this, right? The part after the @ is a snap-to specifier that causes the search to start at the nearest value of that time unit. For example, if the time is 3:16:20, "-15m@m" will search from 3:01:00, whereas "-15m" will search from 3:01:20.
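For reference, a minimal sketch of how the snap-to specifier is typically used in a search's time modifiers (the index name is a placeholder):

index=your_index earliest=-15m@m latest=@m
| stats count

With these modifiers the search covers exactly 15 whole minutes aligned to minute boundaries, rather than a window ending at an arbitrary second.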
Holy events, tscroggins! That search you provided blew my mind and my instance. I did a 24-hour search and I have like 10,000 stat results. It is so overwhelming reading all of these that I don't even know where to begin. You and your search are the real MVP though. I did have to take out the host=*splunkdcloud* from the search because I got zero results, but after I did that, BOOM, all the results.
Hi, I am trying to execute a multiline Splunk search like the one below using the REST endpoint services/search/v2/jobs/export:

https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport

Search command:

| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false

When I execute the curl command, it returns success (200), but the file is not created. Is it possible to invoke a multiline search command using pipes with this or any other search API? The search is dynamic, so I can't create a saved search and execute it.
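For reference, a minimal sketch of posting a piped, multiline search to that endpoint with curl; the host name and credentials are placeholders, the lookup names are taken from the post, and --data-urlencode takes care of encoding the pipes and newlines:

curl -k -u admin:changeme https://localhost:8089/services/search/v2/jobs/export \
  -d output_mode=json \
  --data-urlencode search='| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false'

One thing worth checking is that the search string starts with a pipe (or the word "search"), and that outputlookup writes the file on the search head that ran the job, not on the machine issuing the curl request.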
Thanks @PickleRick, your last reply showed me what I was looking for. Data now rolls off after it gets to cold, and I'm not able to search it when it gets to cold.
Check also the _internal events from component=DatabaseDirectoryManager around that time (not all events have an idx= field). There might be different factors at play, like the retention period. You could check your buckets with dbinspect and see the earliest/latest events in them. Anyway, 10 hot buckets is quite a lot.
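For reference, a minimal sketch of such a dbinspect check (the index name is a placeholder):

| dbinspect index=your_index
| stats min(startEpoch) as earliest max(endEpoch) as latest count by state
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")

This summarizes, per bucket state (hot/warm/cold), how many buckets exist and the time range of events they cover, which helps show whether retention or bucket rolling explains the behaviour.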
Yes.... It's showing maximum warm bucket exceeded. Firing async chiller
I would be very cautious about such third-party hosted extensions. Even in the case of Splunkbase-originating add-ons not written and supported by Splunk, I tend to dig into an app and peek around the code before installing (and boy, there are some "interesting" ones; luckily I haven't found anything malicious yet, but some badly written Python code - why not). And this one is not even hosted on Splunkbase, which means it didn't even pass AppInspect.
Every query should specify an index name before the first pipe.

index=aaa source="/var/log/tes1.log" | stats count by index

Of course, there must be data in the specified index from the specified source for there to be results.