All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am after a way of testing connectivity and reliability between my Search Heads and Indexers in a cluster, as I am seeing errors in the remote searches log for scheduled searches, where some nodes show up with 'does not exist' errors.
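A sketch of the kind of check I have in mind so far (assuming the remote searches log lands in _internal with the splunkd_remote_searches sourcetype - that sourcetype name is an assumption on my part):

index=_internal sourcetype=splunkd_remote_searches "does not exist" earliest=-24h
| stats count latest(_time) as last_seen by host
| convert ctime(last_seen)

Ideally I would like something more proactive than counting errors after the fact, hence the question.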
I would have to move my custom correlation rules to a custom TA-foo app. My correlation searches comprise: custom rules created from scratch (scattered all across the apps estate - yup, it's a mess) and a few of the OOB CRs from the DA-ESS-, SA-, TA-, Splunk_SA_, Splunk_TA_, and Splunk_DA-ESS_ apps that were modified to my requirements. Are there any best practices/recommendations I need to consider other than:
Add import = TA-foo in local.meta in <Splunk_HOME>/etc/apps/SplunkEnterpriseSecuritySuite/metadata
Add request.ui_dispatch_app = SplunkEnterpriseSecuritySuite in savedsearches.conf for each of the correlation searches that I migrate
PS: I will also migrate the dependent KOs (macros/lookups etc.) to the TA-foo add-on in a similar fashion. Is there a better way to go about this, to stay future-safe for upgrades, so that I have a single touchpoint rather than chasing optimisations in each app after any activity such as a version upgrade? Splunk version 7.3.0, ES version 5.3.1.
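For context, this is roughly the inventory search I am planning to use to list the correlation searches scattered across the apps before the move (just a sketch; it assumes CRs on this version are flagged with action.correlationsearch.enabled in savedsearches.conf):

| rest /services/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table eai:acl.app title action.correlationsearch.label
| sort 0 eai:acl.app

That at least tells me which app each CR currently lives in (eai:acl.app), so I know what needs re-homing into TA-foo.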
I am parsing SFTP logs of file downloads and want to count how many bytes a specific user downloaded at what time. The logs look like this (I am abbreviating the standard rfc5424 syslog prefix):

session opened for local user XXX from [10.#.#.#]
received client version #
open "/some/file/name" flags READ mode 0666
close "/some/file/name" bytes read ### written #
open "/some/other/file/name" flags READ mode 0666
close "/some/other/file/name" bytes read ### written #
open "/and/another/filename" flags READ mode 0666
close "/and/another/filename" bytes read ### written #
session closed for local user XXX from [10.#.#.#]

I want to show how many bytes a specific user downloaded at what time. I start with an inline extraction of a few helper fields, such as the username and the file sizes:

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"

If I wanted to see how much data was downloaded (without caring about which user), a timechart does the trick:

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| timechart sum(sftp_bytes_read)

However, the event that has the file size does not have the user, so I cannot filter or chart by username. If I want to filter by sftp_user, the only way I have found is to build a transaction for the user session and then filter on sftp_user (in the example below, host, appname, and procid are extracted by the rfc5424 syslog add-on):

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| transaction host appname procid sftp_user startswith="session opened for" endswith="session closed for"

This does give me an event for every single SFTP session by the user, but I cannot figure out how to get the details of each file download (the individual "close" lines) out of it. What would be the way to do that, or just... "explode" it back into individual events?
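To make the goal concrete, this is an untested sketch of a direction I have been wondering about instead of transaction (it assumes events belonging to one session share host, appname, and procid, and that streamstats sees them in time order):

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| sort 0 host appname procid _time
| streamstats last(sftp_user) as session_user by host appname procid
| where isnotnull(sftp_bytes_read)
| timechart span=1h sum(sftp_bytes_read) by session_user

I am not sure this is the right way, hence the question about exploding the transaction back into events.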
For the past week I have been trying to create a CSV using the Splunk Add-on Builder, but I still haven't figured out a solution. If anybody has a solution, please share. Thanks in advance.
I am using the SmartStore feature of Splunk. The S3 protocol is used to tier data to Ceph storage. Can Splunk SmartStore delete buckets from object storage using the S3 protocol?
Hello, in order to make syslog communication over TLS work, I followed this procedure https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates on one node. I backed up the original cacert.pem and copied the newly created root certificate to $SPLUNK_HOME/etc/auth/cacert.pem. I also copied the server certificate to $SPLUNK_HOME/etc/auth/server.pem and changed the configuration in the files $SPLUNK_HOME/etc/apps/launcher/local/inputs.conf and $SPLUNK_HOME/etc/system/local/server.conf. Since then, I have this error in the logs: ERROR LMTracker - failed to send rows, reason='Unable to connect to license master=https://xxx:8089 Error connecting: SSL not configured on client' (xxx corresponds to the license master server). So I tried to restore the original cacert.pem and server.pem, but I still get the error. I tried to connect to the license master over TLS with curl, but I get an error (Peer's Certificate has expired). I checked the license master certificate and it appears to have expired a month ago, yet license verification works from other Splunk nodes (on which I did not change the root certificate), and curl works from them too. Also, I am not able to renew this certificate as it is signed by the default root CA and I do not have the passphrase for the private key. The connection to the web interface of this node does not work either; I get an internal server error. Could you please help me figure out what is blocking the license verification? Do not hesitate to tell me if you need more details. Thank you
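For reference, this is roughly how I am pulling the error out of the internal logs on the affected node (just a sketch; it relies on the standard component and log_level fields of the splunkd sourcetype):

index=_internal sourcetype=splunkd component=LMTracker log_level=ERROR
| stats count latest(_time) as last_seen by host
| convert ctime(last_seen)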
I have used CSS to increase the size of the pie chart legend labels:

.highcharts-data-label text tspan {
    font-size: 14px !important;
    word-break: break-all !important;
}

On smaller panels with long text, however, the text overflows out of the panel. I've attempted to use word-break to stop this from happening, without any result. Please refer to the attached screenshot. Is there any way to break the text so it stays within the panel?
Hi Splunkers, I heard some rumors that the Microsoft 365 App and anything related to the Microsoft apps are planning to change their free versions to paid versions. Please let me know the truth about the above. Kindly help. Best regards.
Hello Splunk team, do you know if it's possible to run a Splunk search from a Python script, taking into account the selection of the desired period (last week, last month, ...)? For example, for a simple search:

| append [search sourcetype=ESP_Histo env="Web" service="Application" apps="Web Sharepoints" fn="*" managed_entity="*- URL" sampler="SHAREPOINT - URL" ID_EVENT
    | eval Applications=case(apps like "%Web Sharepoints%","WEB SHAREPOINTS")
    | `transactions`]
| eval max_time=if(info_max_time == "+Infinity", now(), info_max_time)
| eval min_time=if(info_min_time == "0.000", 1577836800, info_min_time)
| eval periode1=max_time-min_time
| stats sum(duration) AS durationindispo by Applications, periode1
| outputcsv append=true override_if_empty=false web_search
Hi there, I'm using Splunk Enterprise 7.0.3 and Splunk DB Connect 3.1.3. I have a Data Lab Output configured with a real-time saved search to export Splunk logs to a MySQL DB, and it was working just fine until recently. What I'm experiencing now is that DB Connect doesn't export some of the data from Splunk to the DB (some of it is missing from the DB). The issue goes away when I use a scheduled search with a cron schedule, but I'd like to use a real-time saved search again if possible. I hope I can get a solution. Thanks in advance. EDIT: I forgot to mention that the Data Lab Output actually stopped exporting from Splunk at all approximately 2 weeks ago. I changed the real-time saved search time range from an all-time real-time window to an all-time 15-minute window and then it works again, but the issue I explained at the beginning still happens.
Hi everyone, I am trying to remove partial duplicates in the same field, but couldn't find a solution yet. For instance, I have these values in the same field:

http://www.g
http://www.go
http://www.google.com

I would like to keep only the complete value (http://www.google.com). I tried to use dedup and mvfilter:

eval url_in_parameter=mvfilter(!url_in_parameter LIKE url_in_parameter*)

I am still a beginner with Splunk and couldn't find any similar topics on the internet. Thanks for your help and have a good day.
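To make it easier to reproduce, here is a toy version of the field (a sketch with makeresults; in my real data url_in_parameter is a multivalue field on each event):

| makeresults
| eval url_in_parameter=split("http://www.g,http://www.go,http://www.google.com", ",")

From that I would want to end up with url_in_parameter containing only http://www.google.com.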
I have two fields, skill1 and skill2 (screenshots of their sample values attached). Both of these queries produce results:

timechart span=1d count by skill1
timechart span=1d count by skill2

I want to create a separate variable skill which contains the difference between skill1's counts and skill2's counts, and create a timechart out of it. I tried:

timechart span=1d count by skill1-skill2

But it's not working. Any help would be appreciated!
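In case it helps to pin down what I mean by 'difference': if I only needed the overall daily difference (not split out by the individual field values), I imagine it would look something like this untested sketch:

... | timechart span=1d count(skill1) as skill1_events count(skill2) as skill2_events
| eval skill=skill1_events-skill2_events
| fields _time skill

But I am not sure whether that is even the right shape once the counts are split by value, which is what the 'count by' versions above give me.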
Our Splunk deployment is on-prem and we have automated deployment based on app updates. Users create a PR, make updates, and push to Bitbucket. Automated tests run, and if everything is OK the changes are in production within the hour. We are considering the move to Splunk Cloud but are worried about the manual app vetting process. That process seems cumbersome and might mean that the 5-10 changes across multiple apps we make each week will form an endless backlog. How are you finding the app vetting process? Is it working OK, or are you finding it painful? This question is aimed at large corporates who have moved from on-prem or BYOL to Splunk Cloud.
Hi, I have 2 queries:

(index=abc OR index=def) category=* OR NOT blocked=0 AND NOT blocked=2
| rex field=index "(?<Local_Market>[^cita]\w.*?)_"
| stats count(Local_Market) as Blocked by Local_Market
| addcoltotals col=t labelfield=Local_Market label="Total"
| append [search (index=abc OR index=def) blocked=0
    | rex field=index "(?<Local_Market>\w.*?)_"
    | stats count as Detected by Local_Market
    | addcoltotals col=t labelfield=Local_Market label="Total"]
| stats values(*) as * by Local_Market
| transpose 0 header_field=Local_Market column_name=Local_Market
| addinfo
| eval date=info_min_time
| fieldformat date=strftime(date,"%m-%d-%Y")
| fields - info_*

1> The above query gives me the correct date and time, but when I schedule this as a report the date comes out as epoch time in the CSV file.
2> Also, how can I get the date field in the first column instead of the last column without disturbing the other fields?

Thanks in advance.
The internal metrics logs have information on TCP in and out. This is useful for seeing the traffic profile between Splunk tiers. We are planning our migration to the cloud, and this will help with sizing the pipe to the cloud. Has anyone created dashboards with this information that they could share with us?
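For what it's worth, this is the kind of starting point we have in mind (a rough sketch; it assumes the standard tcpin_connections group in metrics.log and that kb is the per-interval volume):

index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=1h sum(kb) as inbound_kb by host

A matching panel over group=tcpout_connections would cover the outbound side, but a proven dashboard would save us reinventing this.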
Hi All, my customer wants to collect mail transmission/reception logs using the "Microsoft O365 Email Add-on for Splunk" app, but no data is received and I am getting the error below (I attached the full error logs).
-----------------------------
2021-09-15 11:03:56,355 ERROR pid=50226 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/ta_microsoft_o365_email_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/o365_email.py", line 136, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/input_module_o365_email.py", line 182, in collect_events
    messages.append(messages_response['value'])
KeyError: 'value'
-----------------------------
Thank you in advance. Best regards, Bob Hwang
Hello Splunk, I want to create a V-model structure in a dashboard. I am not aware of how to do this; can you please help me with it? I am posting a picture below of how I want the dashboard to look. Thanks in advance, Renuka
Hello All. In indexer clustering, one peer is not searchable and its status is Down. What is the process to fix it? Please, anyone, help; this is a big issue we are facing.
When you get an incident in Splunk ES, the notable is often populated with 'additional fields', some of them custom, some out of the box. I'm looking to see which fields would be displayed for a notable, from either searching the notable macro or the API if need be. Searching the notable macro, I often get 100+ fields for a notable, but maybe only 15 are displayed in the notable itself, whereas some other notable may only have 5 displayed. Is there a way to do a search that indicates which fields would be displayed in the 'additional fields' of the notable? For reference, the additional fields I'm talking about are mentioned here under 'Add a field to the notable event details': https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Customizenotables
Hi there, I'm seeing a strange problem with version 8.0.8. I have a search to build a lookup table one time only, which has to look back 60 days in order to populate correctly. I'm also working on hourly updates for this, but that part is much less of a problem. I do not have access to summary indexing here, so don't bother mentioning it! It so happens that the admins of this site have enabled tsidx reduction, so I'm seeing this warning: "Search on most recent data has completed. Expect slower search speeds as we search the reduced buckets." Not in itself a huge problem, as in the search box I eventually see all the available results. HOWEVER... when I run this search into outputlookup, it doesn't seem to wait for all the results to render and goes off and writes the lookup table without (very annoyingly) waiting for one particular column to appear. Is this a bug? Has anyone else seen this? Any way around it? It seems to be entirely consistent, and I want to stop hammering the Splunk instance with this very expensive search! Cheers, Charles