All Topics

Hi, I have two timecharts, each visualizing the same logs, but the data shown in each timechart is different. In each chart I can zoom in with the mouse, but I want a zoom in one chart to apply the same time range to the other, so that I end up looking at the same logs over the same time span in both of the visualizations I need.
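One way to link the two charts is sketched below in Simple XML: each chart sets a shared pair of tokens from its mouse selection, and both charts read their time range from those tokens. The token names and the base search are placeholders, and the tokens need initial values (for example from the dashboard time picker) before the first zoom:

```xml
<chart>
  <search>
    <query>index=my_logs | timechart count</query>
    <earliest>$zoom.earliest$</earliest>
    <latest>$zoom.latest$</latest>
  </search>
  <!-- zooming here updates the shared tokens, which both charts consume -->
  <selection>
    <set token="zoom.earliest">$start$</set>
    <set token="zoom.latest">$end$</set>
  </selection>
</chart>
<!-- second chart: same <selection> block, same $zoom.*$ tokens -->
```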
Hi all, using Splunk Cloud I'm trying to measure the time difference between when a message is received from a sender and when it is delivered to a recipient. I have a lookup containing all the message IDs, totaling about 400k entries. For each message ID, I'm interested in how long the server took to process the message. Splunk seems to truncate the subsearch to 10,000 results; the search below produced this warning:

Subsearch produced 423340 results, truncating to maxout 10000.

index="stage" (host="msgsrv*" source="/var/log/messaging/msg.log"
    [| inputlookup sept-messages | fields id]
    ((event=msg_rcvd AND "tag=""body""") OR (event=msg_sent AND "tag=""body""") OR (event=msg_sent AND "tag=""result""")))
| reverse
| eval msg_id = id
| eval msg_rcvd_time = if(event == "msg_rcvd", _time, 999999999999.999)
| eval client_out_time = if(event == "msg_sent", _time, 999999999999.999)
| stats values(msg_rcvd_time) AS ins values(msg_sent_time) AS outs values(_time) AS times values(_raw) AS raws values(to_user) AS to_users values(from_user) AS from_users by msg_id, to_user
| eval first_msg_rcvd_time = mvindex(mvsort(ins), 0)
| eval first_msg_sent_time = mvindex(mvsort(outs), 0)
| eval delta = first_msg_sent_time - first_msg_rcvd_time

How can the large lookup be processed, and do you have any suggestions for improving the query?
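Rather than raising the subsearch limit, one common pattern is to invert the filter: search the events first, then use the lookup command to tag events whose id appears in the lookup and discard the rest. The lookup command has no 10,000-row cap. A rough sketch, assuming a lookup definition named sept-messages keyed on id (the in_lookup field name is hypothetical):

```spl
index="stage" host="msgsrv*" source="/var/log/messaging/msg.log"
    ((event=msg_rcvd AND "tag=""body""") OR (event=msg_sent AND "tag=""body""") OR (event=msg_sent AND "tag=""result""")))
| lookup sept-messages id OUTPUT id AS in_lookup
| where isnotnull(in_lookup)
```

Events whose id is absent from the lookup get a null in_lookup and are filtered out; the rest of the original pipeline can follow unchanged.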
Hello experts,

search ... | search "json attribute" | stats sum(latest("_attributes.xxx.total")) by servername
| append [search ... | search "json attribute" | stats sum(latest("_attributes.yyy.total")) by servername]

The above search returns rows in the following format:

servername --- sum(latest("_attributes.xxx.total")) --- sum(latest("_attributes.yyy.total"))

But I want them displayed as follows:

servername --- sum(latest("_attributes.Both_xxx_yyy.total"))

Thank you.
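Instead of append, one option is to compute both attributes in a single stats and add them with eval; a sketch, with hypothetical output field names:

```spl
search ... "json attribute"
| stats latest("_attributes.xxx.total") AS xxx_total latest("_attributes.yyy.total") AS yyy_total by servername
| eval both_total = xxx_total + yyy_total
| table servername both_total
```

Because both values land on the same row per servername, the eval can combine them directly, which append cannot do without an extra re-aggregation.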
Hi, can someone please help me here: I want to fetch value=Private and operation=OVERRIDE using the rex command. I tried the commands below:

| rex max_match=0 field=_raw "IP BLOCK TYPE\",value=\"(?<IP_Block_Type>.*?)\s*(\w*+)\]" | eval IP_Block_Type = substr(IP_Block_Type, 1, len(IP_Block_Type)-1)

| rex max_match=0 field=_raw "IP BLOCK TYPE\",operation=\"(?<IP_Block_Type>.*?)\s*(\w*+)\]" | eval IP_Block_Type = substr(IP_Block_Type, 1, len(IP_Block_Type)-1)

The raw event looks like this:

[name="IP BLOCK TYPE",value="Private",operation="OVERRIDE"]
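Given the sample event, a single rex can capture both fields at once by matching up to the closing quote of each value; a sketch, assuming every event follows the name=...,value=...,operation=... layout shown (the IP_Block_Operation field name is an invented choice):

```spl
| rex max_match=0 field=_raw "name=\"IP BLOCK TYPE\",value=\"(?<IP_Block_Type>[^\"]+)\",operation=\"(?<IP_Block_Operation>[^\"]+)\""
```

This should yield IP_Block_Type="Private" and IP_Block_Operation="OVERRIDE" directly, without the substr cleanup.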
Hi everyone, I have a requirement: I have multiple dashboards, so I created one multiselect drop-down with hardcoded values:

<fieldset submitButton="false" autoRun="true">
  <input type="multiselect" token="OrgName" searchWhenChanged="true">
    <label>DashboardName</label>
    <choice value="All">All</choice>
    <choice value="events">events</choice>
    <choice value="abc">abc</choice>
    <choice value="marketingforce">marketingforce</choice>
    <prefix>(</prefix>
    <valuePrefix>DashboardName ="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
    <suffix>)</suffix>
    <initialValue>All</initialValue>
    <default>All</default>
  </input>
</fieldset>

My second panel is a statistics table containing rows for multiple dashboards. When I select "events" from the drop-down, only the "events" rows should appear in that panel, and the rest should be hidden. How can I achieve that? Please let me know if any other information is needed. Thanks in advance.
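One way to wire the drop-down to the table is to reference the token in the panel's search and give the "All" choice a wildcard value, so the generated clause matches every row; a sketch (the base search is a placeholder):

```xml
<choice value="*">All</choice>
...
<search>
  <query>index=my_dashboards ... | search $OrgName$ | table DashboardName ...</query>
</search>
```

With the configured prefix/valuePrefix, selecting All then expands to (DashboardName="*"), which matches everything, while selecting "events" expands to (DashboardName="events") and filters the table to those rows.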
Hi all,

We have Splunk Enterprise running on an EC2 instance and we want to secure the Splunk Web URL with SSL. We first got a domain name (e.g. "name.domain.com") from AWS Route 53 and then obtained an AWS SSL certificate for this domain.

Next, we created a load balancer with a target group (target-1), HTTP on port 80, pointing to the EC2 instance, and a second target group (target-2), HTTPS on port 443, pointing to the same EC2 instance but not attached to the load balancer. The load balancer has two listeners:

1. An HTTPS 443 listener with the AWS SSL certificate, forwarding to target group target-1.
2. An HTTP 8000 listener, redirecting to HTTPS://#{host}:443/#{path}?#{query}.

On the EC2 instance I run an Apache httpd server, and I made the following changes in the Apache and Splunk config files:

/etc/httpd/conf/httpd.conf
Listen 80
<VirtualHost *:80>
ServerName name.domain.com
Redirect permanent / https://name.domain.com/
</VirtualHost>

/etc/httpd/conf.d/ssl.conf
LoadModule ssl_module modules/mod_ssl.so
Listen 443
<VirtualHost *:443>
ServerName name.domain.com
ProxyPass / https://127.0.0.1:8000/
ProxyPassReverse / https://127.0.0.1:8000/
SSLEngine On
SSLCertificateFile /etc/httpd/ssl/name.domain.crt
SSLCertificateKeyFile /etc/httpd/ssl/name.domain.key
SSLProxyEngine on
SSLProxyVerify none
</VirtualHost>

/opt/splunk/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = 1
privKeyPath = /etc/httpd/ssl/name.domain.key
caCertPath = /etc/httpd/ssl/name.domain.crt

With these changes, when I type "https://name.domain.com" in the browser it opens the Apache test page, and the page is secured. But when I type https://name.domain.com:8000 to open Splunk Web, the browser shows "This site can't provide a secure connection".
I think the problem is that the Apache config cannot locate the SSLCertificateFile: I just provided name.domain.crt there, thinking it would somehow get the SSL certificate and key files from the browser, but it doesn't. Maybe I have to place the SSL files manually in the specified locations, but for AWS-created SSL certificates there is no way to export the .crt and .key files. So what do I use for the SSLCertificateFile and SSLCertificateKeyFile fields in the Apache config files? Please help.
Hello, in my lookup I have the following data:

_time='2020-10-21 15:00' usage='1' host='A'
_time='2020-10-26 15:00' usage='2' host='B'
_time='2020-10-21 16:00' usage='3' host='A'
_time='2020-10-23 18:00' usage='4' host='A'
_time='2020-10-24 15:00' usage='2' host='B'

I want to keep only the earliest entry for each host, so the result looks like this:

host='A' _time='2020-10-21 15:00' usage='1'
host='B' _time='2020-10-24 15:00' usage='2'

How can I do it?
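A sketch of one way to do this, assuming the lookup is named usage_lookup and _time is stored as a string in the %Y-%m-%d %H:%M format shown:

```spl
| inputlookup usage_lookup
| eval t = strptime(_time, "%Y-%m-%d %H:%M")
| sort 0 t
| dedup host
| fields - t
```

After sorting ascending on the parsed time, dedup keeps the first row it sees per host, which is the earliest entry.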
Hi all,

In this example I want to take the existing field Request64 from index index_new and decode it at ingest time to produce a base64-decoded field RequestD in the same index. Can you please tell me whether the following config is valid for this operation?

$ cat inputs.conf
[monitor:///$SPLUNK_DB/index_new/db]
index = index_new
sourcetype = ST_NEW_DATA

$ cat props.conf
[ST_NEW_DATA]
TRANSFORMS-b64 = Request_t

$ cat transforms.conf
[Request_t]
INGEST_EVAL = RequestD=base64 field=Request64 action=decode mode=replace suppress_error=True

$ cat fields.conf
[RequestD]
INDEXED = True

This is the macro I use to decode data: https://splunkbase.splunk.com/app/1922/#/details

This is a dump of the index metadata to find the monitor path:

| rest /services/data/indexes
coldPath                 $SPLUNK_DB/index_new/colddb
coldPath_expanded        /opt/org/splunk_data/splunk/index_new/colddb
homePath                 $SPLUNK_DB/index_new/db
homePath_expanded        /opt/org/splunk_data/splunk/index_new/db
id                       https://127.0.0.1:8089/servicesNS/nobody/search/data/indexes/index_new
summaryHomePath_expanded /opt/org/splunk_data/splunk/index_new/summary
thawedPath               $SPLUNK_DB/index_new/thaweddb
tstatsHomePath_expanded  /opt/org/splunk_data/splunk/index_new/datamodel_summary
I have an annoying log that I am trying to extract data from, and I am lost and don't know where to go from here. What I am trying to extract is as follows:

2020-10-02 17:01:32,360 INFO: User.val (value, value2, value3, value4): User not found. Parameters: userId: 1; requester: userVO: userId: 66666 status: V username: joe.blogs@someplace.com authenticationMethod: PASSWORD emailAddress: joe.blogs@someplace.com firstName: Joe middleName: lastName: Bloggs displayName: Joe Blogs createdBy: 123456 dateCreated: 2019-07-02 17:17:29.68 lastUpdatedBy: 66666 dateLastUpdated: 2020-07-20 16:49:30.409 signupCompletedDate: 2019-07-03 14:24:52.389 lastSignInDate: 2020-10-01 19:04:21.787 title: Person company: Somewhere addressLine1: 1 This Street addressLine2: city: Somewhere state: ST1 zipCode: 1234 country: ThatCountry workPhoneNumber: homePhoneNumber: +001122334455 mobilePhoneNumber: otherPhoneNumber: faxNumber: secretQuestions: [] signInLocked: false signInFailureCount: 0 signInTotalFailureCount: 0 signInLastFailureDate: <null> resetPasswordFailureCount: 0 resetPasswordTotalFailureCount: 0 resetPasswordLastFailureDate: <null> recipientInclusionList: recipientExclusionList: allowSMTPInput: false lastPasswordResetDate: 2019-08-20 15:06:00.856 passwordExpires: true forcePasswordReset: false externalUser: false lastSignInUserName: joe.blogs@someplace.com lastSignInDomain: activationCode: expiryDate: <null> expiredOn: <null> lastActivityDate: 2020-10-01 19:07:12.088 autoUnlockCount: 0 manualUnlockRequired: false selfRegIPAddress: 192.168.0.1 senderRoleExpired: false externalUser: false channelType: Web ipAddress: 10.1.1.1

The first line is the current date (i.e. "2020-10-02 17:01:32,360 INFO:"), and this would be used for my indexed time.
Between this user event and the next user event, the log is interspersed with the following garbage:

2020-10-02 16:59:36,409 ERROR: Mail.send(): (Task ID: x4) Error while sending message: javax.mail.SendFailedException: Invalid Addresses; nested exception is:
com.sun.mail.smtp.SMTPAddressFailedException: 501 5.1.3 Invalid address
javax.mail.SendFailedException: Invalid Addresses; nested exception is:
com.sun.mail.smtp.SMTPAddressFailedException: 501 5.1.3 Invalid address
at com.sun.mail.rcptTo(SMTPTransport.java:1862)
at com.sun.mail.sendMessage(SMTPTransport.java:1118)
at com.neesh.util.Mail.send(Unknown Source)
at com.neesh.fds.util.EmailHelper.sendEmail(Unknown Source)
at com.neesh.fds.core.MailSenderProcess.sendEmail(Unknown Source)
at com.neesh.fds.core.MailSenderProcess.executeHelper(Unknown Source)
at com.neesh.fds.core.AbstractFDSProcess.execute(Unknown Source)
at com.neesh.fds.core.AbstractFDSProcess.startup(Unknown Source)
at com.neesh.fds.core.MailSenderProcess.startup(Unknown Source)
at com.neesh.fds.core.FDSProcessThread.run(Unknown Source)
Caused by: com.sun.mail.smtp.SMTPAddressFailedException: 501 5.1.3 Invalid address
at com.sun.mail.smtp.SMTPTransport.rcptTo(SMTPTransport.java:1715)
... 9 more
2020-10-02 16:59:36,409 WARN: Mail.send(): (Task ID: x4) Exiting send() with error code: -2
2020-10-02 16:59:36,409 ERROR: MailSenderProcess.executeHelper(): Invalid Addresses

I started by adding the data in and using the advanced configuration to break it up, beginning with BREAK_ONLY_BEFORE_DATE set to true. This starts to break the log, but then (as expected) it breaks at every date, so the log splits at every field that contains a date (e.g. lastSignInDate, dateCreated, etc.). The timestamp extraction is then also affected: each break picks up whichever time it reads, so the indexed time is all over the place instead of the first timestamp (i.e. 2020-10-02 17:01:32).

What I would like to do is capture everything between "2020-10-02 17:01:32,360 INFO:" and "ipAddress: 10.1.1.1" (using the example above). The log is a rolling log, so it is constantly being written to. I would also like to get rid of the garbage, but I have not yet tried null-queueing events to remove them before ingest.

There is no recognised sourcetype, nor does the product have a TA on Splunkbase, so I am effectively trying to create a new TA for this data source. Thank you for any assistance.
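A props.conf/transforms.conf sketch for the line breaking and for dropping the mail errors, to be placed on the indexer or heavy forwarder. The sourcetype name is a placeholder, and the regexes assume each event starts with the "YYYY-MM-DD HH:MM:SS,mmm" timestamp format shown; the embedded dates (dateCreated, lastSignInDate, etc.) use a different format without comma-milliseconds and so should not trigger a break:

```ini
# props.conf
[my_app_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TRANSFORMS-dropmail = drop_mail_errors

# transforms.conf
[drop_mail_errors]
REGEX = (ERROR: Mail\.send|WARN: Mail\.send|ERROR: MailSenderProcess)
DEST_KEY = queue
FORMAT = nullQueue
```

Breaking only where a timestamp starts a new line keeps the whole user record as one event with the first timestamp as its indexed time, while the null-queue transform discards the mail-error events entirely.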
Hi, currently my company has two sites (let's say Site A and Site B), and each has its own Splunk Enterprise collecting logs for its own environment. However, Site A has the Enterprise Security app/license but Site B does not. I am wondering whether it is possible to "share" the ES functionality from Site A with Site B and have a centralized ES covering both Site A and Site B.
Hi all, I have been trying to use a where command, but I'm stuck because of double quotes that I can't escape. My command is:

| where match(content_body, "\"https://.*" . recipient . ".*\"")

I have the feeling this isn't the right way to do it: I get no results, although I'm almost sure there should be some. When I change it to

| where match(content_body, "<https://.*" . recipient . ".*>")

I get the other results that I want, so I think it is only the escaping that doesn't work as expected. Can someone confirm whether I am right?

Thank you, Sasquatchatmars
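As a sketch of one way to sidestep the quote escaping entirely: match the double-quote character via its hex escape \x22 inside the regex, so no literal quote has to live inside the SPL string. This assumes the \x22 sequence passes through unchanged to the regex engine, and content_body/recipient are as in your search:

```spl
| where match(content_body, "\x22https://[^\x22]*" . recipient . "[^\x22]*\x22")
```

Note also that if recipient can contain regex metacharacters (the dot in an email address matches any character), the match is looser than a literal comparison.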
Hello everyone, I'm trying to figure out how to discard metrics before they are indexed. Unfortunately, the data source cannot be configured to send only the metrics we want, so we receive tons of unwanted metrics every 10 seconds. For events this is possible via props.conf and transforms.conf, but I'm not able to find a way to do it for metrics. I also checked the metric rollup policy, but it seems to act on already-indexed data (if I understood correctly). Has anyone faced this issue before?

Thanks, G.
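If the metric data points pass through the parsing pipeline of a heavy forwarder or indexer (this does not hold for every metric ingestion path, e.g. some HEC inputs), a null-queue transform on the raw data is one approach; a sketch with placeholder sourcetype and metric-name patterns:

```ini
# props.conf
[my_metrics_sourcetype]
TRANSFORMS-dropmetrics = drop_unwanted_metrics

# transforms.conf
[drop_unwanted_metrics]
REGEX = (unwanted\.metric\.prefix|another\.unwanted)
DEST_KEY = queue
FORMAT = nullQueue
```

Data points matching the regex are routed to the null queue and never reach the metrics index.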
Hi, I have a requirement to read data from an email in Outlook and index it in Splunk. Every week after deployment work we get an email; I want to read the date field in those emails and index it in Splunk. The data is directly in the email body, not in an attachment. Is there a way to read the data from an Outlook email once a week and index it in Splunk? Thank you.
Hi all, I am trying to present data for a specific month, broken down by day.

Using my Splunk search, I can do the following:

Evaluate the value based on two fields:

field1  field2  VALUE
X1      X1-A    10
X1      X1-B    20
X2      X2-A    30
X2      X2-B    10
X3      X3-A    50
X3      X3-B    30

Sum the values based on field1:

field1  VALUE
X1      30
X2      40
X3      80

However, I can only present this data for the range selected in the time picker (i.e. a specific day/month). I tried timechart but could not get any results:

... | stats latest(A) as A, earliest(A) as B by field2, field1
| eval C=A-B
| stats sum(D) as E by field1
| timechart span=1d values(D)

My goal is to break the data down into the days of a month, e.g. with the time picker set to a specific month:

field1    1st  2nd  3rd  ...  last day  total
X1        3    1    10   ...  10        30
X2        10   20   5    ...  2         40
X3        20   15   10   ...  20        80
...
Xn (~30)

How can I present the data this way (i.e. a calendar-style view by month)? Is there another method that does not rely on the time picker? Thanks.
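One way to pivot days of the selected month into columns is to keep a binned _time through the first aggregation, then chart over field1 by a day label and add a totals column; a sketch built on the fields from your search (the day label format is a choice, not a requirement):

```spl
... | bin _time span=1d
| stats latest(A) AS A earliest(A) AS B by _time field2 field1
| eval C = A - B
| eval day = strftime(_time, "%d")
| chart sum(C) over field1 by day
| addtotals fieldname=total
```

This produces one row per field1 with one column per day of the month plus a total, which is close to the calendar-style table above.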
We have two options to send data to our Splunk Cloud; please suggest which option is best.

1) The HF outputs syslog to Logstash, and Logstash pushes to HEC: ArcSight -> HF -> Logstash -> HEC
2) ArcSight pushes to NiFi, and NiFi transforms and pushes to HEC: ArcSight -> NiFi -> HEC
Hi, I have a heavy forwarder in my domain and an indexer in a hybrid cloud environment. I want to move parsed data from the heavy forwarder to the indexer, and I need to know the minimum network bandwidth required between them so I can ask my network team to provision that amount. Thanks.
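As a rough sizing sketch (the 50 GB/day figure is a placeholder; substitute your own daily volume): average bandwidth is daily volume divided by seconds per day, and parsed (cooked) data from a heavy forwarder typically carries overhead beyond the raw volume, so headroom is needed for that and for peaks:

```text
50 GB/day ≈ 51,200 MB ÷ 86,400 s ≈ 0.59 MB/s ≈ 4.7 Mbit/s average
allow for a peak factor (×3–5) plus parsing overhead
→ provision on the order of 15–25 Mbit/s for this example volume
```

The same arithmetic scales linearly with your actual daily ingest.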
Hi all, I am populating a summary index with yesterday's data via a tstats count on a data model, and despite adding addTime=t, the collect command stamps the events with today's time from _raw instead of yesterday's date. My query is:

| tstats count AS "Trip Count" FROM datamodel=ourdatamodel where Condition
| collect index=xyz sourcetype=abc addTime=t

(The time range in the scheduled report is -1d@d to @d.)

The documentation states that collect looks for info_min_time and, if it is not present, falls back to _time. When I tried extracting info_min_time to check whether it is available:

| tstats count AS "Trip Count", earliest(info_min_time) FROM datamodel=ourdatamodel .......

I get no value for this field in the result. Can you please suggest why, despite adding addTime=t, we are not getting yesterday's date?

Thanks, AG.
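One way to make collect stamp the events with the day they describe is to carry _time through tstats by splitting on it; a sketch using the datamodel and index names from your search (the where clause stays your placeholder condition):

```spl
| tstats count AS "Trip Count" FROM datamodel=ourdatamodel where Condition by _time span=1d
| collect index=xyz sourcetype=abc
```

Because each result row now has a _time at yesterday's day boundary, collect can use it directly, and the addtime behavior no longer matters.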
Team, I am planning to set up new Splunk infrastructure on version 7.1 on Linux hosts, with a clustered search head tier and a clustered indexer tier. Is there an order that needs to be followed for installation, e.g. first the search heads, second the indexers, third the DM and LM? Can someone help with the installation and configuration management for clustering?
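A commonly used order is the reverse of the one suggested above: license master first, then the indexer cluster (master node, then peers), then the search head tier, then forwarders. As a sketch of the core CLI steps for an indexer cluster on 7.1 (hostnames, replication/search factors and the secret are placeholders):

```shell
# on the cluster master
splunk edit cluster-config -mode master -replication_factor 2 -search_factor 2 -secret mykey
splunk restart

# on each indexer (peer)
splunk edit cluster-config -mode slave -master_uri https://master:8089 -replication_port 9887 -secret mykey
splunk restart

# on each search head
splunk edit cluster-config -mode searchhead -master_uri https://master:8089 -secret mykey
splunk restart
```

Search head clustering itself (with its deployer) is configured separately on top of this.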
Team, how can I remotely execute a search, download the search results, and store them on a shared drive or in a CSV file?
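One approach is the REST API's export endpoint, which runs a search and streams the results back in one call; a curl sketch (the host, credentials, search string and output path are placeholders):

```shell
curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export \
     -d search="search index=_internal | head 100" \
     -d output_mode=csv \
     -o /mnt/shared/results.csv
```

Run from a machine that can reach the management port (8089) and the shared drive, this can be scheduled with cron or Task Scheduler to automate the download.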
Hi, I am new to Splunk, and it seems the CloudTrail alerts are not working. I need some help with how to fix this issue. Thanks.