All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Can someone help me understand the actual use of base and post-process searches, please? I would also like to know whether streamstats and eventstats are recommended as transforming commands in base searches, and whether there would be any performance issues in using them.
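A minimal sketch of the base/post-process pattern in Simple XML (index, sourcetype, and field names are illustrative). The base search runs once and should end in a transforming command such as stats, because post-process searches only receive the base search's result rows, and passing raw events to post-process searches is subject to tight limits. streamstats and eventstats are not transforming commands, so ending a base search with them tends to hit those limits and hurt performance; it is generally safer to put them in the post-process part.

```xml
<form>
  <!-- transforming base search; runs once for the whole dashboard -->
  <search id="base">
    <query>index=web sourcetype=access_combined | stats count by status, host</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- post-process: operates only on the base search's result rows -->
        <search base="base">
          <query>stats sum(count) as hits by status</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```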
Hello fellow Splunkers, I have a problem with syslog messages from a Juniper SRX firewall in a different timezone. I will give a lot of details below, like always, but first let me give you a short overview of the situation.

The raw syslog message from a firewall in Tokyo, when sent to my syslog-ng (intermediate forwarder) in Germany, looks like this:

<14>May 8 11:44:44 juniperfw01 RT_FLOW: RT_FLOW_SESSION_CREATE: session created 10.8.5.63/16744->10.0.22.222/30013 0x0 None 10.8.5.63/16744->10.0.22.222/30013 0x0 N/A N/A N/A N/A 6 KAPFW-Tmp-KP01 FDMZ WAN 76060 N/A(N/A) reth0.0 UNKNOWN UNKNOWN UNKNOWN N/A N/A

As you can see, the timezone information is missing by default. Juniper...

syslog-ng then manipulates the message header (host, timestamp) depending on the config. After that, the syslog message gets written to disk, where a universal forwarder collects the logfile and sends it to my Splunk indexer. The message appears in Splunk like this (a different flow, but it is the header that matters here):

May 11 10:29:00 juniperfw01.contoso.com junos-alg: RT_ALG_WRN_CFG_NEED: MSRPC ALG detected packet from 10.46.0.11/53461 which need extra policy config with UUID:12345678-1234-abcd-ef00-01234567cffb or 'junos-ts-apc-any' to let it pass-through on ASL session

What I really want is what the syslog collector integrated in Splunk does by default (adding a second header):

May 8 11:44:40 juniperfw01.contoso.com May 8 18:44:40 juniperfw01 RT_FLOW: RT_FLOW_SESSION_DENY: session denied 10.46.30.21/54085->61.205.120.130/123 junos-ntp 17(0) DEFAULT-DENY-LOG(global) FRONT WAN UNKNOWN UNKNOWN N/A(N/A) reth3.0 UNKNOWN policy deny

A Splunk syslog collector has a few options which are very nice:
1. If an event timestamp does not include a recognizable timezone, the Splunk platform uses the time zone of the host that indexes the event.
2. For example, if an event with field timestamp="2016-01-26 21:58:38.107" is indexed by a Splunk platform instance in GMT-8, it will subtract 8 hours from the timestamp and set the _time of the event to 2016-01-26T13:58:38.000-08:00.
3. The Splunk platform normally looks at the text of the event for the timestamp, and by default selects the leftmost recognizable timestamp.

I'm sure there are multiple ways to accomplish this, but my solution has a few requirements:
1. I need the original timestamp included (as Splunk does it). This is necessary because the index timestamp and the event time are slightly different; otherwise flow tracing would be impossible.
2. The traffic must be collected from the intermediate forwarder first.

I would love to accomplish this by modifying the syslog-ng config so that it adds another header, but I did not find anything about that.

Details about my config:

+-----------------+          +-------------------------------+          +-----------------------+
|Syslog Originator|---->----|intermediate forwarder udp:5517|---->----|Splunk Indexer tcp:9997|
+-----------------+          +-------------------------------+          +-----------------------+

Intermediate forwarder, syslog-ng conf:

options {
    log_fifo_size(1024);
    time_reopen (10);
    chain_hostnames(no);
    keep_hostname(yes);
    use_dns(yes);
    use_fqdn(yes);
    create_dirs (yes);
    group(syslog-ng);
    dir_perm(0750);
    perm(0640);
};
source s_udp5517 {
    udp( port(5517) keep-hostname(no) use-dns(yes) );
};
destination d_juniper {
    file( "/data/syslog/forwarder/u5517/$HOST/$YEAR$MONTH$DAY-juniper.log" );
};
# Log
log { source(s_udp5517); destination(d_juniper); };

And the Splunk inputs.conf from my universal forwarder running on this intermediate forwarder:

# Juniper
[monitor:///data/syslog/forwarder/u5517/*/*juniper.log]
sourcetype = juniper
index = juniper
disabled = 0
host_segment = 5
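One possible way to add a second header on the intermediate forwarder, sketched below, is to give the syslog-ng file destination an explicit template. The macro choice here (R_ISODATE for the receive time, S_DATE and HOST for the original header's timestamp and host, MSGHDR for the program part) is an assumption and may need adjusting to the syslog-ng version and how it parses the incoming header:

```
destination d_juniper {
    file(
        "/data/syslog/forwarder/u5517/$HOST/$YEAR$MONTH$DAY-juniper.log"
        # Prepend receive time + FQDN, then keep the original header and message
        template("${R_ISODATE} ${FULLHOST} ${S_DATE} ${HOST} ${MSGHDR}${MSG}\n")
    );
};
```

With a template like this, Splunk's timestamp extraction should pick the leftmost (local, timezone-complete) timestamp while the original event time remains in the raw text.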
Hi Team, I have JavaScript source code from GitHub (https://github.com/bramp/js-sequence-diagrams). How do I use this in my dashboard to render the output? What changes need to be made?
After upgrading to 7.3.3, the drilldown option "link to search" is not working as expected: sometimes it shows "page not found", and sometimes an incomplete query is executed. I tried decoding the query and adding it back, but it still does not work properly. Any idea what the issue is here?
Hi, I sometimes get the error "Network is unreachable while sending mail to:" in the log file /opt/splunk/var/log/splunk/python.log when sending email from an automatic alert. With the command:

index=main | head 5 | sendemail to=email@server.com subject="Here is an email notification" message="This is an example message" sendresults=true inline=true format=raw

there is no problem; I always receive the email. So I don't understand why the email is sometimes sent and sometimes not with the automatic alert. Some information: Splunk version 7.0.3; I use the Gmail SMTP server to send email (smtp.gmail.com:587). Thank you in advance for your answer.
Hi, I want to group events based on the success and failure action for a particular user and dest, as below. Kindly help in writing a query for this. Using streamstats I got the results below. The query I have used:

index=wineventlog_sec* tag=authentication (action=success OR action=failure)
| table _time user dest EventCode action
| sort 0 _time user dest
| streamstats count as attempts by action user dest reset_on_change=true

Time  User  Dest  action   attempts
T1    U1    D1    success  1
T2    U1    D1    success  2
T3    U1    D1    failure  1
T4    U1    D1    failure  2
T5    U1    D1    failure  3
T6    U1    D1    success  1
T7    U1    D1    success  2
T8    U1    D1    success  3

How do I get the maximum attempts performed for each group of user, dest, and action (all three should be present), like below?

Time  User  Dest  action   attempts  max_attempts
T1    U1    D1    success  1         2
T2    U1    D1    success  2         2
T3    U1    D1    failure  1         3
T4    U1    D1    failure  2         3
T5    U1    D1    failure  3         3
T6    U1    D1    success  1         3
T7    U1    D1    success  2         3
T8    U1    D1    success  3         3
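One hedged way to get a per-run maximum from a streamstats running count like this: mark the start of each run (attempts==1), turn the running count of starts into a group id, then take the max per group with eventstats. The group_id field name is illustrative:

```
index=wineventlog_sec* tag=authentication (action=success OR action=failure)
| sort 0 _time user dest
| streamstats count as attempts by action user dest reset_on_change=true
| streamstats count(eval(attempts==1)) as group_id by user dest
| eventstats max(attempts) as max_attempts by group_id user dest
```

Each time attempts resets to 1 a new group starts, so group_id increments once per run, and eventstats then attaches that run's maximum to every row in it.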
Hi all, while evaluating round I got this error:

| stats avg(duration) AS "booking average time" by hours
| eval "booking average time"=round("booking average time",2)

Error in 'eval' command: The arguments to the 'round' function are invalid.

Any ideas? Using an inline search gives me the same result. Thanks, Szymon
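The usual cause of this particular error is quoting: inside eval, double quotes denote a string literal, so round("booking average time", 2) tries to round the literal text rather than the field's value. Field names that contain spaces must be wrapped in single quotes when read:

```
| stats avg(duration) AS "booking average time" by hours
| eval "booking average time" = round('booking average time', 2)
```

Double quotes on the left-hand side of the eval assignment are fine; it is only the right-hand-side reference that needs single quotes.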
I import CSV files structured like the following:

A            Last Login   Region  Disable
abc@abc.com  3/23 18:00   HK      No
tbc@tbc.com  NULL         USA     Yes

I would like to make one chart with the following:
1. Last Login != NULL AND Disable != Yes, timechart by Region
2. Last Login = NULL AND Disable != No, timechart by Region

index=* source="/u01/testing.csv" sourcetype="365csv" | where "Last Login"!=NULL AND "Disable!="YES" | stats count by region
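A hedged rework of the first case, assuming the CSV stores the literal text NULL for missing logins: in SPL's where, field names containing spaces need single quotes (double quotes make a string literal), string comparisons are case-sensitive (the column holds "Yes", not "YES"), and the original has an unbalanced double quote before Disable:

```
index=* source="/u01/testing.csv" sourcetype="365csv"
| where 'Last Login'!="NULL" AND Disable!="Yes"
| timechart count by Region
```

The second case would flip the conditions to 'Last Login'="NULL" AND Disable!="No". If the column is genuinely empty rather than containing the text NULL, isnull()/isnotnull() would be the right test instead.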
Hi, I have created a new 15-day trial account. However, I haven't received the welcome email, and I'm not able to create a controller. Can anyone please help me out with this?
Hi, I am getting the below error while trying to create a connection: Database connection is invalid.
I know how to mask data at indexing time using EVAL and SEDCMD, but there is more logic I need to consider. Can I mask data using a Python script at indexing time? Is there any method like that, or can I create an eval function using a Python script?
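As far as I know, Splunk's index-time transforms (SEDCMD, TRANSFORMS) are regex-based and do not call out to Python, so a common workaround is to mask the data before it reaches Splunk, for example in a scripted input or a preprocessing step on the forwarder host. A minimal sketch with hypothetical masking rules (the patterns and field shapes are illustrative, not Splunk APIs):

```python
import re

# Hypothetical masking rules -- adjust the patterns to your own data.
MASK_RULES = [
    # card-like 16-digit numbers
    (re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"), "XXXX-XXXX-XXXX-XXXX"),
    # anything following "password="
    (re.compile(r"(password=)\S+"), r"\1********"),
]

def mask_line(line):
    """Apply every masking rule to one raw event line."""
    for pattern, replacement in MASK_RULES:
        line = pattern.sub(replacement, line)
    return line

def mask_stream(infile, outfile):
    """Mask a whole log stream, line by line."""
    for raw in infile:
        outfile.write(mask_line(raw))
```

A wrapper could call mask_stream(sys.stdin, sys.stdout) and run as `python mask.py < raw.log > masked.log`, with the forwarder monitoring the masked file instead of the raw one.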
I want to grey out or blur some panels based on the single value result. If the single value result is '0', the panel should be shown greyed out or blurred (not hidden). Please help. Thanks in advance.
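One pattern that comes close, sketched below with illustrative ids and queries: set a token from the single value's result in the search's done handler, and use a second panel that only renders (and thus only injects its CSS) when the token is set. The panel id, the query, and the opacity value are all assumptions; whether CSS injected this way takes effect may depend on the Splunk version:

```xml
<row>
  <panel id="statusPanel">
    <single>
      <search>
        <query>index=main | stats count</query>
        <done>
          <condition match="$result.count$ == 0">
            <set token="dimmed">true</set>
          </condition>
          <condition>
            <unset token="dimmed"></unset>
          </condition>
        </done>
      </search>
    </single>
  </panel>
  <panel depends="$dimmed$">
    <html>
      <style>
        #statusPanel { opacity: 0.3; }
      </style>
    </html>
  </panel>
</row>
```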
I have a dashboard with 4 dropdowns where the user can select a specific value from each dropdown. When one dropdown is selected, the other dropdowns refresh so that they only display the list based on the other fields, for the user to narrow down further. As shown in the diagram below, when each field is set to 'All', the number of values in field4 is high; however, when the user selects a specific value in field3, e.g. pavanml, only 2 values are displayed in field4. In field4 the value is 'All', but effectively there are only 2 values for the user to select. Now, in the search query of the panel, the index has only field4, and when we apply a filter of nums=$fieldSelection$ the value used is *, which does not serve any purpose and takes a lot of time. How should I modify this so that it effectively becomes nums IN ("4812","7746") even though field4 has 'All' selected, given that the list of values is only these 2 based on the user's selection in field3? Also, field4 is actually of string type even though the value is a number.
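One hedged approach that sidesteps the 'All' token entirely: instead of nums=$fieldSelection$, let a subsearch apply the same field3 constraint that populates the field4 dropdown, so the main search is always restricted to the currently valid field4 values. Index and field names here are illustrative:

```
index=my_index
    [ search index=my_source_index field3="$field3$"
      | stats count by field4
      | rename field4 as nums
      | fields nums ]
```

The subsearch's rows are OR'd into the outer search, which behaves like nums IN ("4812","7746") when only those two values match the field3 selection, and stays correct when 'All' expands to the full list.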
Hi experts, I am sure someone must have faced this issue with Web Analytics 2.2.1 on Splunk 7.2.5 (SHC, index cluster), as I have seen similar posts but no concrete answer.

I am aware that when we install this app we get the Web data model. Because I already have CIM, I have cloned this data model and named it "Web Analytics". I can see the data, as it is properly tagged (tag=web), and this gives me the results as expected. I have accelerated this data model with a summary range of 1 day. It reached 99.98% complete in 30 minutes, but has now been stuck there for about 48 hours, as shown below. I tried rebuilding it, with the same result:

MODEL
Datasets: 3 Events, 1 Search Event
Permissions: Shared Globally. Owned by nobody.
ACCELERATION
Status: 99.98% Completed
Access Count: 0. Last Access: -
Size on Disk: 1.91 GB
Summary Range: 86400 second(s)
Buckets: 1006
Updated: 5/10/20 7:04:00.000 PM

I do not know whether this is why dashboards like Analytics Center, Audience, Acquisition, and Behavior Overview are blank.

A couple of other observations. The query below does not produce any results, as Web.eventtype=pageview produces no results:

| tstats summariesonly=t prestats=t dc(Web.http_session) FROM datamodel=`datamodel` WHERE Web.site="*" Web.eventtype=pageview GROUPBY Web.http_session,Web.ua_mobile _time span=1d
| timechart span=1d dc(Web.http_session) by Web.ua_mobile
| rename Web.ua_mobile AS "Mobile Device "
| fields - VALUE

I have also checked the documentation link in the app itself and can see the following:
- Web server log data check (tag=web | head 5): check completed
- Website configuration check: the table is showing results
- Lookup check (Sessions, Pages): check completed
- Data model acceleration check: 98.99%

Also:

| tstats summariesonly=t prestats=t count FROM datamodel=WebAnalytics --> produces no results
| tstats count FROM datamodel=WebAnalytics --> produces results

Any help will be highly appreciated. VG
Hi gurus, I would like to show my data as a 4000 x 3000 matrix. I used the chart command, but the limit on the number of columns is 100. "chart count by col1, col2 limit=0" didn't work; it still shows only 100 columns. I adjusted jschart_series_limit in web.conf to 10000000, but that didn't work either. Does anybody know how to remove the limit?
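One thing worth checking, sketched below: chart options such as limit and useother belong before the aggregation, not at the end of the command, and useother usually needs disabling as well, since extra split-by columns are otherwise folded into an OTHER column. Whether this alone removes the cap at this scale is an assumption:

```
| chart limit=0 useother=false count over col1 by col2
```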
Hello, I am facing an issue with the dboutput command. I try to push data from dashboard tables to a database, but no data is going through.
In a line chart, after reaching a threshold the line needs to show in a different color. How is this done?
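A common pattern for this, assuming an illustrative threshold of 100 and a metric field name of your choosing: split the series into two fields with eval so each can be colored separately:

```
... | timechart span=1h avg(metric) as value
| eval above=if(value > 100, value, null()), below=if(value <= 100, value, null())
| fields - value
```

The colors can then be assigned per field in the panel, e.g. with <option name="charting.fieldColors">{"below": 0x65A637, "above": 0xD93F3C}</option>.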
Hi everyone, how can I use a glowing/flashing button to show the status as connected or disconnected: green if connected, red if disconnected? I am using a CSV file (raj100) for the status info. I want to show this glowing/flashing button in the status field, as in the first screenshot. I am using the XML code below.

<dashboard script="panel_tooltip.js" theme="dark">
  <label>test</label>
  <row>
    <html>
      <a class="button" href="#">Connected</a>
      <style>
        body {
          background: black;
        }
        .button {
          background-color: #004A7F;
          -webkit-border-radius: 10px;
          border-radius: 10px;
          border: none;
          color: #FFFFFF;
          cursor: pointer;
          display: inline-block;
          font-family: Arial;
          font-size: 20px;
          padding: 5px 10px;
          text-align: center;
          text-decoration: none;
          -webkit-animation: glowing 1500ms infinite;
          -moz-animation: glowing 1500ms infinite;
          -o-animation: glowing 1500ms infinite;
          animation: glowing 1500ms infinite;
        }
        @-webkit-keyframes glowing {
          0% { background-color: #63f707; -webkit-box-shadow: 0 0 3px #B20000; }
          50% { background-color: #469416; -webkit-box-shadow: 0 0 40px #FF0000; }
          100% { background-color: #63f707; -webkit-box-shadow: 0 0 3px #B20000; }
        }
      </style>
    </html>
    <panel>
      <title>Status</title>
      <table>
        <search>
          <query>| inputlookup raj100 | table ApName "Area CP Name" CLevel Date | eval CLevel=if(like(ApName,"%CCP%"), "Connected", "disconnected") | table ApName "Area CP Name" CLevel</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>
Recently I archived buckets of the _internal index (older than 90 days) from one site of Splunk indexers to a Hadoop cluster, following https://docs.splunk.com/Documentation/Splunk/8.0.3/Indexer/ArchivingindexestoHadoop. I can see the buckets copied to the Hadoop cluster, and I am able to view events from the archived index. But my challenge is that I see a higher bucket count in the Hadoop cluster than on the Splunk indexers in the dashboards under Settings -> Virtual indexes -> Archived indexes -> View dashboards. I used the SPL query below with a time range older than 90 days:

| dbinspect index=_internal | stats count by splunk_server | addcoltotals

Please help me understand what went wrong in my approach, or share the exact query to compare bucket counts between the archived index and the Splunk index.
I need to alter data in Splunk using props.conf, and I need to use external_cmd to run a Python script. Can you give me an example Python script for that? Thanks.
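For what it's worth, external_cmd belongs to transforms.conf (search-time external lookups), not props.conf, and it cannot alter data at index time; Splunk runs the script with a CSV on stdin and expects a CSV with the output fields filled in on stdout. A minimal sketch, where the stanza name, the raw/masked field names, and the reverse-string transform are all illustrative:

```python
import csv
import sys

# transforms.conf would reference this script roughly like:
#   [mylookup]
#   external_cmd = my_lookup.py raw masked
#   fields_list = raw, masked
# (names above are illustrative, not from the original question)

def transform(value):
    """Stand-in for real logic: reverse the string."""
    return value[::-1]

def run(infile=sys.stdin, outfile=sys.stdout):
    """Read Splunk's CSV from infile, fill the output field, write CSV back."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Splunk sends the input field populated and the output field empty
        if row.get("raw") and not row.get("masked"):
            row["masked"] = transform(row["raw"])
        writer.writerow(row)
```

The actual script file would end with a call to run() and live in the app's bin directory; it would then be invoked at search time via the lookup command, not during indexing.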