All Topics


Hello, sendemail does not work with variable fields. For example:

    index=mail | table id domain | eval email=id."@abc.com" | sendemail to="$email$" subject="test" sendresult=true inline=true message="test"

Searching index=_internal for the email shows:

    command="sendemail", {} while sending mail to:
    ERROR sending email. subject="test", results_line="None", recipients="[]", server="localhost"

Why isn't my email address picked up? It works normally when I enter the email address literally.
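A minimal sketch of one workaround, assuming the goal is one email per result row: sendemail does not expand per-result field tokens such as $email$ on its own, but the map command substitutes tokens from each row before running its inner search (the maxsearches cap and the inner makeresults are illustrative, not required values):

    index=mail
    | eval email=id."@abc.com"
    | table email
    | map maxsearches=10 search="| makeresults | sendemail to=\"$email$\" subject=\"test\" inline=true message=\"test\""

Note that map runs one search per row, so cap the result set (for example with head) before mapping.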
How do I change the x-axis label rotation in Dashboard Studio? I added the following line to the visualization's options, but nothing changes: "xAxisLabelRotation": 90
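A hedged sketch of what to check: the option only applies to chart visualizations that support it (for example splunk.column), it must sit inside that visualization's "options" object, and the accepted values may be restricted to a fixed set of angles, with negative values rotating the other way. The visualization ID below is a placeholder:

    "visualizations": {
        "viz_column_1": {
            "type": "splunk.column",
            "options": {
                "xAxisLabelRotation": -45
            }
        }
    }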
Hi All, we are facing an issue when running an AppInspect (version 2.22.0) report on our app. Below is the failure we see:

    Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future.
    Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/chartjs/Chart.min.js
    Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/chartjs/Chart.js
    Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/daterangepicker/daterangepicker.min.js

As per the error description, we have placed all our JS dependencies under the /appserver folder, but we still see this failure. Any help or suggestions are highly appreciated.

Thanks, Jabez.
Hi at all, I found a difference in the visualization of a Cluster Map between a dashboard and Search (or refreshing the panel), using the same data and search. In the dashboard: [screenshot] In Search, or after refreshing the panel: [screenshot] As you can see, the same data gives a different visualization. Does anyone have an idea why this happens? Ciao. Giuseppe
Hello All, I would like to be able to track down each and every configuration change on our monitored DCs, AD, etc. I need to be able to see those changes in comparison fashion, week to week. I aim to create an automatic report that would run every Monday, for instance, and tell us about any changes introduced on the monitored assets (focusing on OS type: Linux, Windows, and their previous and current versions). Thank you all in advance! D
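One possible pattern, as a hedged sketch: snapshot the current state into a lookup each week, then diff the new snapshot against the previous one. The index, sourcetype, lookup, and field names below are placeholders for whatever your inventory data actually provides:

    index=os_inventory sourcetype=inventory:host
    | stats latest(os_type) as os_type latest(os_version) as current_version by host
    | lookup last_week_versions.csv host OUTPUT os_version as previous_version
    | where current_version != previous_version
    | table host os_type previous_version current_version

A second scheduled search running "... | outputlookup last_week_versions.csv" right after the Monday report would roll the snapshot forward for the following week.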
Hi Everyone, I need to migrate a report from Sumo Logic to Splunk. In Sumo Logic the report uses the time compare option: the compare operator lets you compare current search results with data from a past time period for aggregate searches. For example, to compare the behavior of the backfill error count in 5-minute spans with a 3-minute timeshift, it returns the count of events for every 5 minutes alongside the count from 3 minutes prior. How can I achieve this in Splunk? Here is the sample Sumo Logic query:

    (_sourceCategory=app (error OR fail*) AND exception)
    | "Quote Sequences Error" as ALERT_DESC
    | _sourcecategory as SUMO_SOURCE_CATEGORY
    | "APP-PROD" as APP_ID
    | _sourcehost as APP_SERVER_NAME
    | _sourcename as APP_SOURCE_CATEGORY
    | _sourcecategory as SUMO_SOURCE_CATEGORY
    | timeslice 3m
    | count by _timeslice,APP_ID,APP_SERVER_NAME,APP_SOURCE_CATEGORY,SUMO_SOURCE_CATEGORY,ALERT_DESC
    | formatDate(_timeslice, "HH:mm:ss:SSS") as EventTime
    | if(_count > "100","1", if(_count > "50","2", if(_count > "3" and EventTime > "12:00:00" and EventTime < "05:00:00", "4", if(_count > "3", "3","0")))) as sumo_severity
    | format ("%s total errors in the last 3 minutes", _count) as notes
    | compare with timeshift 3m
    | if (isBlank(sumo_severity_3m) , "0", sumo_severity_3m) as sumo_severity_3m
    | where sumo_severity != sumo_severity_3m and !(isblank(sumo_severity))
    | sort by _timeslice desc
    | fields - EventTime, EventTime_3m
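A rough Splunk equivalent, as a hedged sketch: SPL has no compare operator, but you can run the same aggregation twice, shift the timestamps of the earlier window forward, and join the two series on _time. The index name and time offsets below are placeholders:

    index=app (error OR fail*) exception
    | bin _time span=3m
    | stats count as current_count by _time
    | join type=left _time [
        search index=app (error OR fail*) exception earliest=-24h-3m latest=-3m
        | bin _time span=3m
        | eval _time=_time+180
        | stats count as count_3m_ago by _time ]
    | eval delta=current_count-coalesce(count_3m_ago,0)

The severity logic from the Sumo query would then become an eval case() over current_count and count_3m_ago.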
I just upgraded Splunk ES from 6.2.0 to 7.0.1 on Splunk Core version 8.1.5. However, some of the dashboards, such as Cloud Security, Predictive Analytics, Executive Summary, and SOC Operations, are not visible in any navigation menus. The Executive Summary and SOC Operations dashboards are accessible from the ES app's dashboards page, but others, such as Predictive Analytics and Cloud Security, are not. In fact, Cloud Security does not show up as a navigation menu option at all.
I have a search that returns emails of interest (possibly malicious). I'm trying to add a subsearch that will return a count of how many times each sender address has been seen in the last 30 days (regardless of the timeframe used in the main search). When using the search below, Splunk returns an "Error in eval command: Fields cannot be assigned a boolean result" error from the eval command. The tstats command works fine independently.

    index=proofpoint
    | rex field=msg.header.reply-to{} ".*\<(?<Sender_Address>[a-zA-Z0-9\.\-\+]+@[a-zA-Z0-9\.\-]+)\>"
    | eval Sender_Count=[ | tstats count where index=proofpoint TERM($Sender_Address$) earliest=-30d@m latest=now]
    | table _time msg_header_from msg.header.reply-to{} Sender_Address Sender_Count

Don't worry about the sub-optimal email-matching regex; this is just a POC. I tried appendcols too, with no luck. Is this possible? Thank you.
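One way to sidestep the subsearch entirely, as a hedged sketch: a subsearch runs once before the outer search, so it cannot be evaluated per row. Instead, search the full 30 days, count per sender with eventstats, and then filter back down to the window of interest (the 4-hour window below is a placeholder):

    index=proofpoint earliest=-30d@m latest=now
    | rex field=msg.header.reply-to{} ".*\<(?<Sender_Address>[a-zA-Z0-9\.\-\+]+@[a-zA-Z0-9\.\-]+)\>"
    | eventstats count as Sender_Count by Sender_Address
    | where _time >= relative_time(now(), "-4h@m")
    | table _time msg_header_from Sender_Address Sender_Count

This reads 30 days of data in one pass, which costs more I/O but avoids running one search per row, as map would.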
Good day, Splunkers! I've been banging my head trying to capture all email addresses as recipients. Is this even possible?

    "status":"delivered":"recipient":"some.name@mail.com":"subject":"Thank you!":"
    "status":"delivered":"recipient":"some.middle.name@mail.com":"subject":"Thank you!":"
    "status":"delivered":"recipient":"\"Name Some \"<some.name@mail.com>":"subject":"Thank you!":"
    "status":"delivered":"recipient":"some.name@mail.com,  another.name@mail.com,  more.names@mail.com subject":"Thank you!":"
    "recipient":"\"different.name@mail.com\" <different.name@mail.com>, \"same.name@mail.com\"<same.name@mail.com>":"subject":"Thank you!":"
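Given how inconsistent the recipient formats are, one hedged sketch is to skip parsing the structure and instead pull every email-shaped token out of the event with max_match=0, which makes rex return a multivalue field:

    ... | rex max_match=0 "(?<recipient>[\w.+-]+@[\w.-]+\.\w+)"
    | eval recipient=mvdedup(recipient)

mvdedup collapses duplicates such as "name@mail.com" <name@mail.com>, where the same address appears twice in one value. If the raw event can also contain non-recipient addresses, first isolate the recipient value with a preliminary rex before extracting.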
I am searching a new source of JSON data sent to Splunk (over HEC), and it is very, very slow. Searching over just the past 4 hours returns 726,405 events, and the search took 3.5 minutes. The job inspector shows that most of the time (almost all of it) is being spent in command.search.kv. Does Splunk have problems searching / extracting fields from larger JSON events? Is there an event length at which Splunk starts to have issues? I looked at the length of all events from this source over a 24-hour period, and the majority are 1,000-1,999 characters long:

    Event Length    Event Count
    <1,000          2,452
    1,000-1,999     2,043,605
    2,000-2,999     2,236
    3,000-3,999     590
    9,000-9,999     5

The JSON data is properly formatted; it is valid JSON. Splunk is able to extract the fields, and I also checked with an online JSON format validator.
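One thing worth ruling out, as a hedged sketch: command.search.kv is search-time field extraction, and a common cause of slow JSON searches is extracting the same fields twice, once at index time (INDEXED_EXTRACTIONS=json on the HEC input) and again at search time (the default auto JSON behavior). If your input already uses indexed extractions, disabling search-time JSON extraction for that sourcetype may help (the sourcetype name below is a placeholder):

    # props.conf on the search head
    [my:json:sourcetype]
    KV_MODE = none
    AUTO_KV_JSON = false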
Has anyone run across the following issue before? I am trying to implement the Splunk SOAR app, but we are not able to select the "create server" button. We have referred to the online documentation, but we do not see the option to replace each instance of phantom with splunk_app_soar.
Dear Splunkers, I have a dashboard with two pie charts and one statistics table. I have set a drilldown on both pie charts so that when I select a slice on either chart, the data in the statistics table refreshes accordingly. The problem: when I click a slice on the first pie chart, the table loads correctly. But when I then click a slice on the second pie chart, the table includes both the previous token value and the current one. I need the previous token value to be reset when I click on the second pie chart. Could someone please help?
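Two hedged options, assuming this is a Simple XML dashboard. The simplest is to have both pie charts set the same token, so a click on either chart replaces whatever the other one set (the token name is a placeholder):

    <drilldown>
        <set token="filter_tok">$click.value$</set>
    </drilldown>

If you need separate tokens per chart, each drilldown can explicitly unset the other chart's token:

    <drilldown>
        <set token="pie1_tok">$click.value$</set>
        <unset token="pie2_tok"></unset>
    </drilldown>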
Hello from the Splunk Assist Team! Splunk Assist is a cloud-connected service for Splunk® Enterprise that puts your telemetry data to work. It inspects Splunk deployments and makes cloud-powered recommendations to help bridge security gaps, optimize cost, and improve performance in your environment. With Assist, you can: 1) detect critical issues, 2) receive in-product recommendations to fix detected issues, and 3) receive continuous improvements (i.e., no need to keep up with version upgrades and security patches). We will bring new capabilities to Assist regularly and would love to answer any questions you might have about the Assist app. Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources.

Want a high-level overview without reading the docs on what Splunk Assist is? See Splunk Assist: Cloud-Powered Insights Just for You, at Your Fingertips, or view the .conf22 video presentation + demo.
Want to get into the nitty-gritty details? Start with the Splunk Assist documentation.
Want to request more features? Add your ideas and vote on other ideas at the Assist Ideas Portal.

Please reply to this thread with any questions or to get extra help!
Hi Splunk Community, I'm trying to get syslog from my router to show up in the Splunk "NetFilter IPtables" app. It would be great to get its view showing results where it currently says "no results found".

I have configured the router to send syslog to my Mac's IP address on port 514, and set up a data input in Splunk Enterprise on port 514 to listen for Syslog (as opposed to "linux_messages_syslog"). That seems to transmit, because if I run sudo tcpdump -lns 0 -w - udp and port 514 | strings from my CLI, I get a result like this every few seconds:

    <132> 2022-07-25T12:13:03.696064-04:00 L4 FIREWALL[9488]: nflog_log_fw(), action=DROP reason=POLICY-INPUT-GEN-DISCARD hook=INPUT mark=136314880 IN=br2 OUT= MAC=00:00:00:00:00:00:84:84:26:2b:9b:ea:bd SRC=167.94.138.139 DST=xx.yy.zzz.www LEN=44 TOS=0x00 PREC=0x00 TTL=40 ID=2539 PROTO=TCP SPT=19169 DPT=8901 SEQ=3022021036 ACK=0 WINDOW=1024 RES=0x00 SYN URGP=0 OPT (MSS=1460 )

And this seems to work to some extent in Splunk, because I'm also picking up the messages now in Splunk Search and Reporting and in CIM.

This app seems to depend on:
TA_Netfilter (I unpacked and copied TA_netfilter into /Applications/Splunk/etc/apps/TA_netfilter/; it seems to be installed)
Splunk Enterprise 6.2+ (I'm running 9.0.0.1)
Splunk Common Information Model 4.3+ (I have installed 5.0.1 on Splunk Enterprise; I did not change the configuration in http://localhost:8000/en-US/app/search/cim_setup)

I suspect I'm missing something simple. Could you please suggest some reasons why NetFilter IPtables might not be working properly, to help me troubleshoot?

Thanks, Rob
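One hedged guess at the missing piece: add-on-backed apps usually key their searches off a specific sourcetype, so if the UDP input is landing as generic syslog, the TA's extractions and the app's dashboards never match. Check TA_netfilter/default/props.conf for the sourcetype it expects and set it on the input; the sourcetype below is a placeholder, not the confirmed name:

    # inputs.conf
    [udp://514]
    sourcetype = netfilter
    connection_host = ip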
Hello, we have a heavy forwarder that occasionally receives an event that exceeds the bounds of the Splunk indexers. When this happens, the heavy forwarder freezes and stops sending data to the indexers. Is there a setting to tell the heavy forwarder to discard that event from the queue and keep going? Our only workaround at this time is to restart the heavy forwarder. Thank you. This is our limits.conf:

    [kv]
    limit = 150
    indexed_kv_limit = 0
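A hedged sketch of one option: TRUNCATE in props.conf caps the byte length of a single event at parse time, so an oversized event gets cut down rather than jamming the pipeline. It is set per sourcetype on the heavy forwarder (the sourcetype name and byte limit below are placeholders; the default is 10000):

    # props.conf on the heavy forwarder
    [my:chatty:sourcetype]
    TRUNCATE = 100000

If the problem is one enormous pseudo-event created by bad line breaking, also review LINE_BREAKER and SHOULD_LINEMERGE for that sourcetype.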
The rex command I'm using:

    (?:\w+\s\:\s)(?<command>[^\;]+)?\;\s(?<Datainput>[^\s]+)\s\;\s(?<Extra>[^\s]+)

Data:

    1) command : TTY-unknown ; data ; ... ;
    2) command : Random blurb ; TTY-unknown ; data ; ... ;
    3) command : Some other blurb ; TTY-unknown ; data ; ... ;
    4) command : TTY-unknown ; data ; ... ;
    5) command : TTY-unknown ; data ; ... ;

When I rex this data, what I get is (for rows 1, 4, and 5 the values shift left into the wrong columns):

        command             TTY       datainput
    1   unknown             data      ...
    2   Random blurb        unknown   data
    3   Some other blurb    unknown   data
    4   unknown             data      ...
    5   unknown             data      ...

What I want is output that doesn't shift the values for rows 1, 4, and 5, so it would look like this. How do I accomplish this?

        command             TTY       datainput
    1                       unknown   data
    2   Random blurb        unknown   data
    3   Some other blurb    unknown   data
    4                       unknown   data
    5                       unknown   data
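A hedged sketch of one fix: anchor on the literal TTY- prefix and make the leading command segment optional, so that when the line starts with TTY-unknown the command group simply does not participate instead of swallowing the wrong token:

    (?:\w+\s:\s)(?:(?<command>[^;]+?)\s;\s)?TTY-(?<TTY>[^\s;]+)\s;\s(?<Datainput>[^\s;]+)

For line 2 the optional group captures Random blurb; for lines 1, 4, and 5 the regex skips straight to TTY-, leaving command empty. This assumes the TTY token always begins with the literal text TTY-.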
I have a very large Oracle database table that is being used as a log sink for an application. There is high transaction throughput on this table. I would like to get the data in this table (not about this table) into Splunk in as close to real time as possible. Unfortunately, I do not have access to the source in order to point a log sink directly at Splunk. I realize I could read the data and move it in batches, but I'm wondering if there are any less-intensive options, such as transaction log replication. What is the best practice for moving this type of data?
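For reference, a hedged sketch of the usual low-friction approach: a Splunk DB Connect rising-column input polls the table on a schedule and only fetches rows past a checkpoint, which keeps each poll cheap even on a high-throughput table. The table and column names below are placeholders; the ? is DB Connect's checkpoint substitution:

    -- rising-column query for a DB Connect input
    SELECT * FROM APP_LOG_SINK
    WHERE LOG_ID > ?
    ORDER BY LOG_ID ASC

With a short polling interval this gets within seconds of real time without replication infrastructure; true change-data-capture (for example, a replication tool feeding Splunk over HEC) avoids polling entirely but is a much bigger lift.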
Are there any controls to limit the size of a user search? The use case is Splunk Cloud: limiting a search if it would download, for example, more than 10 TB from SmartStore into the cache.
What is the list of credentials that are acceptable for Just-in-Time entry? Or is there a way to add to that list when creating our own apps? Looking through the documentation for the metadata, I'm not seeing anything.
I converted an old Classic dashboard over to Dashboard Studio for experimentation and implementation, but one of the core features of the dashboard was a map that used markers listing the Name, Lat, and Lon of a location. After the conversion to Studio using the marker option, it no longer lets me show the Name of the location, only the Lat and Lon. Does anyone have a suggestion on what needs to be changed post-conversion? The code is:

    "visualizations": {
        "viz_map_1": {
            "type": "splunk.maps",
            "options": {
                "center": [20, 30],
                "zoom": 1,
                "layers": [
                    {
                        "seriesColors": ["#00FF00", "#00FF00"],
                        "type": "marker",
                        "additionalTooltipFields": ["location"],
                        "latitude": "> primary | seriesByName('Lat')",
                        "longitude": "> primary | seriesByName('Lon')"
                    }
                ]
            }
        }
    }
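A hedged guess at the fix: in Dashboard Studio maps, additionalTooltipFields controls which extra result columns appear on the marker tooltip, and it can only show fields that actually exist in the search results. Assuming the search returns a Name column, adding it there may restore the old behavior:

    "additionalTooltipFields": ["Name", "location"]

If the marker still only shows Lat and Lon, check that the data source the map references actually includes Name in its output fields.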