All Topics

Hello, in my classic dashboards I have created input fields within a panel so that they apply only to that specific panel. With Dashboard Studio, however, I am only able to create inputs at the global level, and I cannot place them next to my panels as I could in the classic version. Can anyone help me if you have already tried achieving this? Thank you.

How do I include a JavaScript file from another JavaScript file in the local appserver/static folder? The code below somehow does not work:

require([
    'jquery',
    './myCustomUtils',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'],
    function($, myCustomUtils, mvc, _){

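A pattern that is often suggested for this (a sketch, not taken from this post): register the module's location with require.config so it resolves against the app's static folder rather than Splunk Web's own script tree. "my_app" below is a placeholder for the actual app directory name, and myCustomUtils.js is assumed to live in appserver/static and to be written as an AMD module.

require.config({
    paths: {
        // placeholder: replace "my_app" with the app directory name under $SPLUNK_HOME/etc/apps
        myCustomUtils: "../app/my_app/myCustomUtils"
    }
});

require([
    'jquery',
    'myCustomUtils',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'
], function($, myCustomUtils, mvc, _) {
    // myCustomUtils now holds whatever appserver/static/myCustomUtils.js returns via define()
});

The relative './myCustomUtils' path resolves against Splunk Web's own script location, which is the usual reason such imports fail; requiring the absolute path "/static/app/my_app/myCustomUtils.js" is another commonly used variant. A hard refresh (or the _bump endpoint) is typically needed after changing static files.
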
Dear experts, I've created an alert based on a message string to identify closed connections. However, the alert gets triggered only once, although the problem doesn't get fixed until we bounce the application. I'm looking for a query that keeps alerting until the success message "*reconfigured with 'RabbitMQ' bean*" is the latest event compared to the failure strings across all events.

Failed messages: "*com.rabbitmq.client.ShutdownSignalException*" OR "*channel shutdown*"
Success message: "*reconfigured with 'RabbitMQ' bean*"

Current alert query, which fires only once:

index IN ("devcf","devsc") cf_org_name IN (xxxx,yyyy) cf_app_name=* "rabbit*" AND ("channel shutdown*" OR "*com.rabbitmq.client.ShutdownSignalException*" OR "*rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error*")
| stats count by cf_app_name, cf_foundation

Thank you for the help.

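One possible shape for such an alert, sketched under the assumption that the strings above are the only ones that matter: classify each event as success or failure, take the latest timestamp of each per app, and keep the rows where the newest failure is more recent than the newest success (or no success has been seen at all).

index IN ("devcf","devsc") cf_org_name IN (xxxx,yyyy) cf_app_name=* "rabbit*" ("*channel shutdown*" OR "*com.rabbitmq.client.ShutdownSignalException*" OR "*AlreadyClosedException*" OR "*reconfigured with 'RabbitMQ' bean*")
| eval status=if(match(_raw, "reconfigured with 'RabbitMQ' bean"), "success", "failure")
| stats max(eval(if(status="failure", _time, null()))) as last_failure max(eval(if(status="success", _time, null()))) as last_success by cf_app_name, cf_foundation
| where isnull(last_success) OR last_failure > last_success

Scheduled over a window that reaches back far enough (for example 24 hours) with a "number of results > 0" trigger, this keeps re-alerting on every run until the "reconfigured" message becomes the newest event for that app.
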
Hi, we are running searches via the API with the query:

index=_internal sourcetype=splunk_python subject | fields subject,recipients | where subject like "%ORION%"

But when we try to collect the results, we get an error from a lookup that is not part of the original search. Via the web UI the search works fine; the problem only occurs via the API. Have you experienced something like that? Thanks!

How do I integrate AppDynamics with ServiceNow without using any plugin? Can I integrate AppDynamics with ServiceNow using a custom REST API, and if so, how?

Hi all, while searching Splunk role data using the REST API with

| rest /services/authentication/users splunk_server=local

is there any way to create a dashboard to check which users are currently logged in to Splunk? Thanks

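For active sessions (as opposed to the configured users that /services/authentication/users returns), one commonly suggested endpoint is httpauth-tokens; a sketch, with the caveat that the returned field names can vary between versions and that it only reflects sessions on the server it runs against:

| rest /services/authentication/httpauth-tokens splunk_server=local
| dedup userName
| table userName timeAccessed

Saved as a dashboard panel, that gives a simple "currently logged-in users" table; charting successful logins from index=_audit over a recent window is another common variant.
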
Hi all, we have a case where we want to restrict a user from editing or updating an input parameter. We have created an add-on using Splunk's Add-on Builder (v4.x); the add-on takes a couple of data input parameters, including one that has to be disabled and must not be allowed to be updated. Is there a way we can update the code, or does Add-on Builder provide an option for this kind of functionality? Any help or suggestions will be highly appreciated.

Thanks, Jabez.

Hello Splunk Community, I have the following search command:

index="myIndex" host="myHost" myScript Running OR Stopped
| eval running= if(like(_raw, "%Running%"), 1, 0)
| eval stopped= if(like(_raw, "%Stopped%"), 0, 1)
| table _time running stopped
| rename running AS "UP"
| rename stopped AS "DOWN"

The resulting chart looks strange. There are four events with "Stopped" in them; the rest are all "Running". The script logs either Running or Stopped every 5 minutes. When I hover over the line, it reports DOWN as 1 the entire time, even though it should be 0 and only become 1 four times. How do I adjust this so that it looks like this:

-------------_----------------
________--_________

where the upper line = Running and the bottom line = Stopped?

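Note that the second eval returns 1 for every event that does not contain "Stopped" (its then/else branches are swapped), which would explain DOWN sitting at 1 the whole time. A sketch of one way to get the two-line shape described above, assuming the 5-minute logging interval:

index="myIndex" host="myHost" myScript Running OR Stopped
| eval UP=if(like(_raw, "%Running%"), 1, 0)
| eval DOWN=if(like(_raw, "%Stopped%"), 1, 0)
| timechart span=5m max(UP) as UP max(DOWN) as DOWN

timechart also keeps the x-axis continuous when a 5-minute bucket has no event, which a plain table of _time values does not.
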
Good morning all. Please help, I'm in a real bind that I can't solve: I'm a student preparing my graduation project and it's my first time with Splunk, so I want to know whether my steps are correct.

I want to analyze the user accounts of my Active Directory, working only on the information concerning account connections (logon, logoff...) and also account creation, modification, deletion, etc. For that I installed these 3 apps on my Splunk server:
Splunk_TA_windows
Splunk_TA_microsoft_ad
SA-ldapsearch (I don't know why I can't save the domain password in this add-on even though the connection test is successful)

After that I copied the two folders "Splunk_TA_windows" and "Splunk_TA_microsoft_ad" to the Splunk forwarder folder on my AD server. Then I configured this inputs file and copied it into a new "local" folder on the two servers:

************************
###### Monitor Inputs for Active Directory ######
[monitor://C:\debug\netlogon.log]
sourcetype=MSAD:NT6:Netlogon
disabled=0
renderXml=false
index=main

[WinEventLog://Security]
disabled = 0
index=main
start_from oldest
current_only = 0
evt_resolve_ad_obj = 1
Interval checkpoint = 5
whitelist=4724,4725,4726,4624,4625,4720,4732,4722,4738,4742,4729,4715,4719,4768,4769
blacklist1 = EventCode="4662" Message="Object Type: (?!\s*group Policy Container)"
blacklist2 = EventCode="566" Message="Object Type: (?!\s*group PolicyContainer)"
renderXml=false

[WinEventLog://Microsoft-windows-Terminalservices-LocalSessionManager/operational]
disabled = 0
index=main
renderXml=false
******************

Am I missing another step? Is the inputs file configuration correct? Can I meet my needs with this configuration?

Thank you for answering me, because I cannot find the right answer on the net and I have a big problem: I find incomplete information on some users when I run searches about their session logons and logoffs. I apologize for the long message, but I had to explain all the details to get the best advice.

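A sketch of how those stanzas are more commonly written, based on the Splunk_TA_windows conventions, so treat it as something to compare against rather than a guaranteed fix: "start_from oldest" is missing its "=", "Interval checkpoint" is not a recognized key (the usual one is checkpointInterval), and the blacklist regexes normally reference groupPolicyContainer without spaces.

# local/inputs.conf on the forwarder (sketch to compare against)
###### Monitor Inputs for Active Directory ######
[monitor://C:\debug\netlogon.log]
sourcetype = MSAD:NT6:Netlogon
disabled = 0
renderXml = false
index = main

[WinEventLog://Security]
disabled = 0
index = main
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4624,4625,4715,4719,4720,4722,4724,4725,4726,4729,4732,4738,4742,4768,4769
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml = false

[WinEventLog://Microsoft-Windows-TerminalServices-LocalSessionManager/Operational]
disabled = 0
index = main
renderXml = false

The file belongs on the universal forwarder on the AD server as Splunk_TA_windows/local/inputs.conf (the indexer/search head only needs the TAs for parsing and knowledge objects), and the forwarder needs a restart after the change.
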
Hi, I noticed this failure in the AppInspect report (version 2.22.0). Is there a way we can fix this on Splunk Cloud? Below are the failure details from the report:

Please check for inbound or outbound UDP network communications. Any programmatic UDP network communication is prohibited due to security risks in Splunk Cloud and App Certification. The use or instruction to configure an app using Settings -> Data Inputs -> UDP within Splunk is permitted. (Note: UDP configuration options are not available in Splunk Cloud and as such do not impose a security risk.)
File: bin/botocore/session.py
Line Number: 204

Thanks, Jabez.

Hello, sendemail does not work with a variable field. Example:

index=mail | table id domain | eval email=id."@abc.com" | sendemail to="$email$" subject="test" sendresult=true inline=true message="test"

>> command="sendemail", {} while sending mail to:

index=_internal email

>> ERROR sending email. subject="test", results_line="None", recipients="[]", server="localhost"

Why can't it resolve my email address? It works normally when I enter the email address directly.

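One explanation that fits the empty recipients list is that sendemail's options are parsed once for the whole search, so $email$ is never substituted per result (that style of token only works as $result.email$ in a saved alert's email action). A workaround that is often suggested, sketched here and not verified against this data, is to drive one sendemail per row with map for a small, bounded set of recipients:

index=mail
| dedup id
| eval email=id."@abc.com"
| map maxsearches=10 search="| makeresults | sendemail to=\"$email$\" subject=\"test\" message=\"test\" sendresults=false"

map runs one subsearch per driving row (capped by maxsearches), so this only makes sense for short lists; for a scheduled alert, putting $result.email$ in the email action's To field is the more usual route.
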
How do I change the x-axis label rotation in Dashboard Studio? I added the following line to the visualization's options, but nothing changes:

"xAxisLabelRotation" : 90

Hi all, we are facing an issue when running the AppInspect (version 2.22.0) report on our app; below are the failures we see:

Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future. Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/chartjs/Chart.min.js

Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future. Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/chartjs/Chart.js

Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future. Bad imports: ['moment'] File: /tmp/tmpylhrwm6y/cisco-cloud-security/appserver/static/vendor/daterangepicker/daterangepicker.min.js

As per the error description we have placed all our JS dependencies under the /appserver folder, but we still see this failure. Any help or suggestions are highly appreciated.

Thanks, Jabez.

Hi all, I found a difference in the Cluster Map visualization between the dashboard and Search (or after refreshing the panel), using the same data and the same search.

In the dashboard: [screenshot]
In Search, or after refreshing the panel: [screenshot]

As you can see, the same data gives a different visualization. Does anyone have an idea why this happens? Ciao. Giuseppe

Hello all, I would like to be able to track every configuration change on our monitored DCs, AD, etc. I need to see those changes in a week-to-week comparison. I aim to create an automatic report that runs every Monday, for instance, and tells us about any changes introduced on the monitored assets (focusing on OS type: Linux, Windows, and their previous and current versions). Thank you all in advance! D

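As a starting point only (the index, sourcetype, and os_version field below are placeholders for whatever inventory data is actually being collected), a weekly scheduled search could bucket the last 14 days into "previous" and "current" weeks and keep the hosts whose value changed:

index=asset_inventory sourcetype=os_inventory earliest=-14d@d latest=@d
| eval week=if(_time >= relative_time(now(), "-7d@d"), "current_week", "previous_week")
| stats latest(os_version) as os_version by host, week
| xyseries host week os_version
| where current_week != previous_week OR isnull(previous_week)

Saved with a Monday cron schedule and an email action, that produces the week-over-week change report; the same shape works for any attribute the monitored data actually carries.
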
Hi everyone, I need to migrate a report from Sumo Logic to Splunk. The Sumo Logic report uses the time compare option: the compare operator lets you compare current search results with data from a past time period for aggregate searches. For example, if you wanted to compare the count of backfill errors in time slices with a 3-minute timeshift, it returns the count of events for each slice along with the count from 3 minutes prior. How do I achieve this in Splunk? Here is the sample Sumo Logic query:

(_sourceCategory=app (error OR fail*) AND exception)
| "Quote Sequences Error" as ALERT_DESC
| _sourcecategory as SUMO_SOURCE_CATEGORY
| "APP-PROD" as APP_ID
| _sourcehost as APP_SERVER_NAME
| _sourcename as APP_SOURCE_CATEGORY
| _sourcecategory as SUMO_SOURCE_CATEGORY
| timeslice 3m
| count by _timeslice,APP_ID,APP_SERVER_NAME,APP_SOURCE_CATEGORY,SUMO_SOURCE_CATEGORY,ALERT_DESC
| formatDate(_timeslice, "HH:mm:ss:SSS") as EventTime
| if(_count > "100","1", if(_count > "50","2", if(_count > "3" and EventTime > "12:00:00" and EventTime < "05:00:00", "4", if(_count > "3", "3","0")))) as sumo_severity
| format ("%s total errors in the last 3 minutes", _count) as notes
| compare with timeshift 3m
| if (isBlank(sumo_severity_3m) , "0", sumo_severity_3m) as sumo_severity_3m
| where sumo_severity != sumo_severity_3m and !(isblank(sumo_severity))
| sort by _timeslice desc
| fields - EventTime, EventTime_3m

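Splunk has no direct equivalent of "compare with timeshift", but the same effect can be sketched by bucketing into the slice size and letting streamstats pull in the previous slice's count (everything apart from _time below is illustrative):

index=app (error OR fail*) exception
| bin _time span=3m
| stats count by _time
| sort 0 _time
| streamstats current=f window=1 last(count) as count_3m_ago
| eval change=count - count_3m_ago

If the comparison has to be split by app or host, add those fields to the stats by clause and repeat them in a streamstats ... by clause; for hour-over-hour or day-over-day comparisons, timechart followed by timewrap is the other common pattern.
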
I just upgraded Splunk ES from 6.2.0 to 7.0.1 on Splunk Core version 8.1.5. However, some of the dashboards, such as Cloud Security, Predictive Analytics, Executive Summary, and SOC Operations, are not visible in any navigation menus. The Executive Summary and SOC Operations dashboards are accessible from the ES app's Dashboards page, but others such as Predictive Analytics and Cloud Security are not. In fact, Cloud Security does not show up as a navigation menu option at all.

I have a search that returns emails of interest (possibly malicious). I'm trying to add a subsearch that returns a count of how many times each sender address has been seen in the last 30 days (regardless of the timeframe used in the main search). When using the search below, Splunk returns an "Error in eval command: Fields cannot be assigned a boolean result" error for the eval command. The tstats command works fine on its own.

index=proofpoint
| rex field=msg.header.reply-to{} ".*\<(?<Sender_Address>[a-zA-Z0-9\.\-\+]+@[a-zA-Z0-9\.\-]+)\>"
| eval Sender_Count=[ | tstats count where index=proofpoint TERM($Sender_Address$) earliest=-30d@m latest=now]
| table _time msg_header_from msg.header.reply-to{} Sender_Address Sender_Count

Don't worry about the sub-optimal email-matching regex - it's just a POC. I tried appendcols too, with no luck. Is this possible? Thank you

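eval cannot take a per-row subsearch like that: the subsearch runs once and expands to something like count=123, so eval sees a comparison and complains about a boolean result. One workaround, sketched with the same regex and subject to the usual subsearch row and time limits, is to compute the 30-day counts in a subsearch and join them back on Sender_Address:

index=proofpoint
| rex field=msg.header.reply-to{} ".*\<(?<Sender_Address>[a-zA-Z0-9\.\-\+]+@[a-zA-Z0-9\.\-]+)\>"
| join type=left Sender_Address
    [ search index=proofpoint earliest=-30d@m latest=now
      | rex field=msg.header.reply-to{} ".*\<(?<Sender_Address>[a-zA-Z0-9\.\-\+]+@[a-zA-Z0-9\.\-]+)\>"
      | stats count as Sender_Count by Sender_Address ]
| table _time msg_header_from msg.header.reply-to{} Sender_Address Sender_Count

At larger volumes, a scheduled search that writes sender counts to a lookup (and a lookup command in the main search) avoids join's subsearch limits.
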
Good day Splunkers! I've been banging my head trying to capture all email addresses as recipients. Is this even possible?

"status":"delivered":"recipient":"some.name@mail.com":"subject":"Thank you!":"

"status":"delivered":"recipient":"some.middle.name@mail.com":"subject":"Thank you!":"

"status":"delivered":"recipient":"\"Name Some \"<some.name@mail.com>":"subject":"Thank you!":"

"status":"delivered":"recipient":"some.name@mail.com,  another.name@mail.com,  more.names@mail.com subject":"Thank you!":"

"recipient":"\"different.name@mail.com\" <different.name@mail.com>, \"same.name@mail.com\"<same.name@mail.com>":"subject":"Thank you!":"

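Since the recipient values take several shapes (bare addresses, display names with angle brackets, comma-separated lists), one blunt but workable sketch is to pull every address-shaped token out of the raw event and de-duplicate; the index name is a placeholder, and this will also pick up addresses outside the recipient field if any exist:

index=<your_mail_index>
| rex field=_raw max_match=0 "(?<recipient>[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})"
| eval recipient=mvdedup(recipient)
| stats count by recipient

mvexpand recipient can then turn the multivalue field into one row per address if per-recipient rows are needed.
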
I am searching a new source of JSON data sent to Splunk over HEC, and it is very, very slow. Searching over just the past 4 hours shows 726,405 events, and the search took 3 1/2 minutes. The job inspector shows that almost all of the time is spent on command.search.kv. Does Splunk have problems searching/extracting fields from larger JSON events? Is there an event length at which Splunk starts to have issues? I looked at the length of all events from this source over a 24-hour period; the majority are 1,000-1,999 characters long.

Event Length    Event Count
<1,000          2,452
1,000-1,999     2,043,605
2,000-2,999     2,236
3,000-3,999     590
9,000-9,999     5

The JSON data is properly formatted - it is valid JSON. Splunk is able to extract the fields, and I also checked it with an online JSON format validator.

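command.search.kv is search-time field extraction, so with this event volume the time is mostly per-event extraction work rather than an event-length limit being hit. Two things that may be worth testing (sketched here, not verified against this data): run the search in Fast mode, or narrow what gets extracted via the sourcetype's KV_MODE; the sourcetype name below is a placeholder.

# props.conf on the search head; sourcetype name is a placeholder
[my:hec:json]
# 'json' limits automatic search-time extraction to structured JSON parsing;
# 'none' disables it entirely, leaving spath to extract fields on demand
KV_MODE = json

Comparing command.search.kv in the job inspector before and after such a change shows whether automatic extraction is actually the bottleneck.
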