All Topics

@chrisyounger I really like your visualisations and the level of detail you put into their configurability; it makes them so useful. A couple of things about the dendrogram app:

First, in the dendrogram docs the token names are wrong: it says the viz creates the tokens $dendrogram_viz-path$, $dendrogram_viz-name$ and $dendrogram_viz-drilldown$, but the hyphens are actually underscores in the tokens that get created.

Secondly, have you thought about supporting collapsible nodes, with or without drilldown? Something along the lines of https://observablehq.com/@d3/collapsible-tree, or perhaps similar to the click support in the sunburst, where you support a view of N layers, so that as you expand a node the node above can auto-collapse. It would be really handy, as at the moment I guess the only way is to rerun the search and only draw the paths you need.

Other enhancements would be to support a node size per node, or perhaps based on path depth.
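In case it helps anyone else hitting the token-name issue, this is a minimal sketch of the drilldown markup that works once underscores are used (selected_path and selected_name are just example token names of mine, and the search is elided):

```xml
<viz type="dendrogram_viz.dendrogram_viz">
  <search>...</search>
  <drilldown>
    <!-- note: underscores, not hyphens, in the viz-created tokens -->
    <set token="selected_path">$dendrogram_viz_path$</set>
    <set token="selected_name">$dendrogram_viz_name$</set>
  </drilldown>
</viz>
```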
We are using the Splunk 5.25 Android SDK in our mobile app. We can register errors and transactions and can see them in the Splunk management dashboard (https://mint.splunk.com/dashboard), but we are not able to see events. I have followed the Splunk docs and am trying to log the event as follows:

Mint.logEvent("Home pressed")

Please let us know why events are not displayed in the Splunk dashboard.
Hello everyone. I have a problem with Splunk Enterprise (8.0.2) on CentOS 7. I have 3 log sources (all forwarding logs to port 514). In the picture we can see the data inputs with different ports, including 514, and in the black (terminal) screen we can see that port 514 is not listening. Please help me.
I have a table which lists hosts with event code 52 in the last 24 hours. However, my requirement is to highlight hosts which are repetitive, e.g. hosts which have been generating event code 52 for the last 2-3 days or for a week. Also, can I get a count of the number of days on which a host has been generating event code 52?
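For context, I was experimenting with something along these lines to get a per-host day count, though I'm not sure it's the right approach (a sketch only; index=wineventlog, EventCode and host are my assumed names):

```
index=wineventlog EventCode=52 earliest=-7d
| bin _time span=1d
| stats dc(_time) AS days_active count AS events BY host
| where days_active > 1
```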
Hi, how do I fetch the raw logs for the sourcetype wms_oracle_sessions?

Query:
index=main sourcetype=wms_oracle_sessions
| bucket span=5m _time
| stats count AS sessions by _time,warehouse,machine,program
| search warehouse=ew
| stats sum(sessions) AS psessions by _time,program
| timechart avg(psessions) by program

Thank you very much. Regards, Rahul
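To be clear, what I'd expect to show the raw events is just the base search without the transforming commands (with the warehouse filter carried forward from my stats pipeline, since warehouse is an indexed-event field):

```
index=main sourcetype=wms_oracle_sessions warehouse=ew
```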
We use the Splunk Bluecoat TA but many fields are missing. The vendor says they have not changed the log format, but it looks as though the format has changed. A sample log is:

Sep 18 15:25:44 2020-09-18 07:25:41 4115 10.X.X.X 200 TCP_TUNNELED 6569 1787 CONNECT tcp sy.abc.net 443 / - abc2323 LOCAL1\ACC124 - 172.X.X.X - - “Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36” OBSERVED "MyShopWhitelist;LivePe;ProjectA_URL;docsApproveURLs;ABC_GBB_GGG;Business/Econony” - 172.X.X.X 0#015
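One thing I noticed in the sample is the curly quotes (“ ”) around the user-agent and category fields; I'm wondering if a SEDCMD in props.conf could normalise them before field extraction. A sketch of what I mean (the stanza name bluecoat:proxysg is an assumption — use whatever sourcetype the TA actually applies):

```
[bluecoat:proxysg]
# sketch: replace curly double quotes with straight quotes so quoted fields parse
SEDCMD-fix_smart_quotes = s/[“”]/"/g
```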
I have a dashboard search which ends with a timechart like this:

| eval VUser=if(isnotnull(Stop_time),0,VUser)
| timechart count(VUser) by Protocol

The event with the VUser field is only present for one time interval of the timechart series, so I want to do the equivalent of a filldown until Stop_time is not null, and then reset the VUser count. filldown only works when there are nulls; in the example above, when there are no values for VUser, timechart generates a zero value rather than a null, which is why filldown is no good. What else can I do in this case?
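For what it's worth, one direction I've been considering is doing the carry-forward before the timechart with streamstats and resetting when Stop_time appears; a rough sketch of the idea (untested, and VUser_filled is just my working name):

```
| streamstats last(VUser) AS VUser_filled reset_after="(isnotnull(Stop_time))" by Protocol
| timechart count(VUser_filled) by Protocol
```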
The events have fields like below:

description, code
AAxxxxx, 200
AAxxxx,301
AAxxxx,401
BBxxxx,200
BBxxxx,303
AAxxx, 502

I want to filter out (not display) events with the below condition: keyword "AA" is in 'description' and code matches [345]\d{2}.

I tried the SPL below but it is not working as I expected:

base search | NOT (search description="*AA*" AND regex code="[345]\d{2}")

Could you provide some suggestions?
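To show what I was aiming for in eval terms, I also tried something like this (a sketch; it assumes both description and code are extracted as fields):

```
base search
| where NOT (like(description, "%AA%") AND match(code, "^[345]\d{2}$"))
```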
Hi, I am trying to extract the names of individuals from a field in my data. For example, from the data below I want to extract Jack Smith and Joe Shmoe. Any suggestions on how I can do this?

Some Text Some Text 24-Jul-2020 10:52:41 - Jack Smith (Approval history) Jack Smith approved INT128302 for group **CAB - DEV Tech
Some Text Some Text 22-Jul-2020 12:56:37 - Joe Shmoe (Approval history) Joe Shmoe approved INT128302 for group **Dev - DBA Tech group
Some Text Some Text

Thanks! Rohan
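For reference, this is roughly the rex I was experimenting with, though I'm not confident in the pattern (approver is just my token name; max_match=0 so that both names are captured):

```
... | rex max_match=0 "\d{2}:\d{2}:\d{2} - (?<approver>\w+ \w+) \(Approval history\)"
| table approver
```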
I have a correct working query, but for some reason Splunk doesn't return the results and shows no event sampling as a result. It works fine after I extract the SQL, format it in a SQL editor first, and then copy-paste the formatted SQL code back into Splunk and rerun. I'm not sure if it's something to do with a new line or a space in the query. Has anyone else experienced a similar issue, and how did you resolve it?
Dear all experts,

We have data that is generated every 5 minutes, and we want to predict it. We need to use the seasonal local level (LLP) algorithm, because our data period is 1 week: 12 points in one hour, 288 points in one day, 2016 points in one week. When I try to predict it, an error occurs.

My source code:

index=traffic TP13G
| eval Timestamp = strftime(_time,"%Y/%m/%d %H:%M:%S")
| table _time Timestamp Source ip Port Description BW Incoming Outgoing
| eval total=Incoming+Outgoing
| timechart span=5m limit=0 avg(total) as total by equipment
| fields _time TP13G
| timechart span=5m values(TP13G) as CEN_TP13G
| predict "CEN_TP13G" as CEN_TP13G_prediction algorithm=LLP holdback=0 future_timespan=2016 period=2016 upper95=upper95 lower95=lower95
| eval isOutlier = if(CEN_TP13G_prediction!="" AND 'CEN_TP13G_prediction' != "" AND ('CEN_TP13G_prediction' < 'lower95(CEN_TP13G_prediction)' OR 'CEN_TP13G_prediction' > 'upper95(CEN_TP13G_prediction)'), "Outlier", "0")
| eval check=strftime(_time,"%Y/%m/%d %H:%M:%S")
| eval check=strptime(check,"%Y/%m/%d %H:%M:%S")
| where check > now()-604800
| fields - check
| rename lower95(CEN_TP13G_prediction) as predict_low
| rename upper95(CEN_TP13G_prediction) as predict_high
| fields + _time CEN_TP13G_prediction , CEN_TP13G isOutlier
| eval CEN_TP13G_prediction=round(CEN_TP13G_prediction,3)

Is there any way to modify the period limit from 2000 to 2016? Thanks for the help!

Will Tseng
I'm working with a client to ingest data from PCAPs into an on-prem Splunk.    I've installed the independent Stream Forwarder (v7.2.0) on the host with the PCAPs and I'm able to ingest the PCAPs.  One issue that the client has raised is that they would like for the generated events to include as metadata the name (including path) of the PCAP file from which the event came. I'm using the -r option to specify the name of the PCAP and I've tried including the following on the command line but StreamFwd wouldn't accept it: --_meta "source_pcap::/path/to/and/name/of/the.pcap"   Any thoughts on how this could be done?  I'm open to raising it as an "idea" if need be.
Hi All, The AppInspect checks for Splunk Apps were recently changed to verify that setup.xml is not used, instead requiring that apps follow the approach demonstrated by the canonical example at https://splunkbase.splunk.com/app/3728/#/overview Unfortunately, we have noticed that the save functionality does not work in the Firefox browser (it seems to loop infinitely instead of redirecting to the main App page). Has anyone managed to make this new configuration approach work 100% correctly in Firefox? Is it possible for this example to be updated so that it works reliably across modern browsers? Thanks!
I am looking for the Splunk 7.0 RPM to upgrade from my running 6.4.1. I am unable to find it listed in the older releases; 7.1.1 seems to be the oldest one listed on the site. The upgrade docs mention going from 6.4 to 7.0 and then to 8.0. Does anyone know if 7.0 is still downloadable?
I am having difficulty configuring the Cb Defense Add-On for Splunk on a heavy forwarder, which is forwarding to my Splunk cloud environment. I have followed the configuration guides and I have created a Carbon Black API, but I see the following errors in the ta-cb_defense-carbonblack_defense_.... log file. 2020-09-23 23:44:39,961 +0000 log_level=INFO, pid=11719, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=77 | Set log_level=INFO 2020-09-23 23:44:39,961 +0000 log_level=INFO, pid=11719, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=78 | Start CarbonBlack task 2020-09-23 23:44:39,962 +0000 log_level=INFO, pid=11719, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_use_cache_file, code_line_no=76 | Stanza=CarbonBlack using cached file store to create checkpoint 2020-09-23 23:44:39,962 +0000 log_level=INFO, pid=11719, tid=MainThread, file=event_writer.py, func_name=start, code_line_no=28 | Event writer started. 2020-09-23 23:44:39,962 +0000 log_level=INFO, pid=11719, tid=MainThread, file=thread_pool.py, func_name=start, code_line_no=66 | ThreadPool started. 2020-09-23 23:44:39,963 +0000 log_level=INFO, pid=11719, tid=MainThread, file=timer_queue.py, func_name=start, code_line_no=39 | TimerQueue started. 2020-09-23 23:44:39,963 +0000 log_level=INFO, pid=11719, tid=MainThread, file=ta_data_loader.py, func_name=run, code_line_no=48 | TADataLoader started. 
2020-09-23 23:44:39,964 +0000 log_level=INFO, pid=11719, tid=Thread-2, file=scheduler.py, func_name=get_ready_jobs, code_line_no=100 | Get 1 ready jobs, next duration is 119.999002, and there are 1 jobs scheduling 2020-09-23 23:44:40,010 +0000 log_level=WARNING, pid=11719, tid=Thread-4, file=loader.py, func_name=_get_log_level, code_line_no=133 | [stanza_name="CarbonBlack"] The log level "" is invalid, set it to default: "INFO" 2020-09-23 23:44:40,016 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=engine.py, func_name=start, code_line_no=36 | [stanza_name="CarbonBlack"] Start to execute requests jobs. 2020-09-23 23:44:40,016 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=engine.py, func_name=run, code_line_no=219 | [stanza_name="CarbonBlack"] Start to process job 2020-09-23 23:44:40,016 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=engine.py, func_name=_get_checkpoint, code_line_no=189 | [stanza_name="CarbonBlack"] Checkpoint not specified, do not read it. 2020-09-23 23:44:40,016 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=http.py, func_name=request, code_line_no=165 | [stanza_name="CarbonBlack"] Preparing to invoke request to [https://defense-prod05.conferdeploy.net/integrati$ 2020-09-23 23:44:40,325 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=http.py, func_name=_decode_content, code_line_no=36 | [stanza_name="CarbonBlack"] Unable to find charset in response headers, set it to default "utf-8" 2020-09-23 23:44:40,325 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=http.py, func_name=_decode_content, code_line_no=39 | [stanza_name="CarbonBlack"] Decoding response content with charset=utf-8 2020-09-23 23:44:40,325 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=http.py, func_name=request, code_line_no=169 | [stanza_name="CarbonBlack"] Invoking request to [https://defense-prod05.conferdeploy.net/integrationServices/$ 2020-09-23 23:44:40,335 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=engine.py, 
func_name=_on_post_process, code_line_no=164 | [stanza_name="CarbonBlack"] Skip post process condition satisfied, do nothing 2020-09-23 23:44:40,335 +0000 log_level=INFO, pid=11719, tid=Thread-4, file=engine.py, func_name=_update_checkpoint, code_line_no=178 | [stanza_name="CarbonBlack"] Checkpoint not specified, do not update it.   I am curious if the lack of a checkpoint reference is preventing the API from returning data to the TA, but I didn't find any documentation related to the checkpoint or how to debug the TA log file.  Where do I go from here?  
Hi, I would like to know step-by-step how to instrument TIBCO BWCE apps running in Docker containers on AWS ECS (Fargate).  Thank you.
As described in the subject, most of the panels in Overview for the older-version instances, which are yet to be upgraded to 8.0.6, are not populated correctly. The Resource Usage tab also shows many broken panels. When I tried to run the same broken searches separately, I got error messages like:

[indexer01] Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API
I have the add-on "TA Microsoft Windows Defender" installed on our UFs using a deployment server. The configuration is the same on all UFs, but some of them are working (sending logs to Splunk Cloud) and the others are not. I can see that all servers are successfully sending other event log events (System, Application, Security), but some of them are not sending Windows Defender logs. Core functionality is working, with no errors related to the Defender TA. This is on Windows Server 2016. Thanks!
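In case it's relevant, I believe the TA reads the Defender operational channel with an inputs.conf stanza along these lines (quoting from memory, so treat it as a sketch and compare against the app actually deployed to the UFs):

```
[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
disabled = 0
renderXml = false
```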
Hi All,

I'm using DB Connect 3.x, and I want to create a template for future MS SQL connections to speed the process up for others. I've read I can accomplish this by moving db_inputs_template.conf from the add-on's default directory to dbconnect/local.

In my dev region I have DBX 3.x installed, along with the MS SQL add-on. However, due to some limitations and constraints of the region, I am unable to test the functionality.

So I am wondering: can I just copy/touch db_inputs_templates.conf in dbconnect/local and have it populate the drop-down with the templates, without the add-on(s) installed?
I'm looking to set up a field transformation (with no access to transforms.conf) for my department's logs, to be able to differentiate our data centers and environments. For some reason the configuration I've put in doesn't appear to be working, though no errors are returned.

Setup (made generic):
A tag is configured on 'host' to be one of the following: app_prod_njw, app_prod_bel, app_preprod_njw, or app_preprod_bel.
A new private field transformation has been created with the following configuration:
name: app-env-datacenter
type: regex-based
Regular expression: app_(?<env>\w+)_(?<datacenter>.+)
Format: env::$1 datacenter::$2
Source key: tag::host
"Create multivalued fields" is not selected
"Automatically clean field names" is not selected

When I run a search for my logs, however, I'm not seeing the env field or the datacenter field. I've also double-checked my regex. Is there some inherent misunderstanding in this setup? Is it not possible to do this against 'tag::host' as a source key? Any and all help is appreciated!!!

~ Jen
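As a point of comparison (sidestepping the tag::host question rather than answering it), I'd expect a search-time rex over the tag value to produce the two fields; a sketch, where index=app_logs is a placeholder and I'm assuming the tag values surface in the tag field at search time:

```
index=app_logs
| rex field=tag "app_(?<env>\w+)_(?<datacenter>.+)"
| stats count by env datacenter
```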