Hi All, we are using the NetApp Cloud Secure add-on to collect data from Cloud Secure. We have configured the input but are not getting all the data. Below is the configuration; please suggest if anything needs to be added.

[cloud_secure_alerts://******]
builtin_system_checkpoint_storage_type = auto
entityaccessedtime = 1635795607850
index = main
interval = 60
netapp_secure_insight_fqdn = ********.cloudinsights.netapp.com
sourcetype = netapp:cloud_secure:alerts
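A hedged sanity check while troubleshooting, grounded only in the values from the stanza above: confirm what is actually arriving for that sourcetype and when it last arrived (a sketch):

index=main sourcetype=netapp:cloud_secure:alerts
| stats count latest(_time) as latest_event by source

It is also worth checking index=_internal with log_level=ERROR for splunkd messages mentioning the input name, since modular input failures usually surface there.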
I am trying to configure NEAP (Notable Event Aggregation Policy) action rules to integrate ServiceNow incident comments by passing a token, but it looks like Splunk doesn't support tokens in NEAP action rules. I heard there is a custom script that can pass the tokens. Does anybody have an idea about this customization and how we can achieve it?
We have enabled the bidirectional correlation search for ServiceNow in our ITSI deployment, but unfortunately the itsi_notable_event_external_ticket lookup is not updating with the proper values. I couldn't find the saved search that is used to update the lookup, so I can't troubleshoot further. Can someone tell me how the itsi_notable_event_external_ticket lookup is updated?
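One hedged way to locate whichever saved search maintains that lookup is to query the saved search inventory over REST (a sketch; assumes your role is allowed to list saved searches across apps):

| rest /servicesNS/-/-/saved/searches
| search search="*itsi_notable_event_external_ticket*"
| table title eai:acl.app search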
I have borrowed a search from an earlier question to help give kWh information for a given month. How can I modify the search to show only the host_name and the sum total of the avg_kWh column?

index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| addcoltotals

Sample output:

_time             samples  watt_sum    avg_kWh     kW_Sum
2022-05-30 18:00  12       44335.0     3.69458     44.3350
...
2022-05-31 23:00  12       43489.0     3.62408     43.4890
                  7686     27425688.0  2595.96346  27425.6880
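A minimal sketch of one way to get there, assuming you want one total per host: add host_name to the stats by-clause so the hourly rows keep the host, then sum avg_kWh per host at the end instead of using addcoltotals.

index=network sourcetype=zabbix metric_name="st4InputCordActivePower" host_name="pdu02.LON5.Contoso.com"
| bin _time span=1h
| stats count as samples sum(value) as watt_sum by _time host_name
| eval kW_Sum=watt_sum/1000
| eval avg_kWh=kW_Sum/samples
| stats sum(avg_kWh) as total_avg_kWh by host_name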
Hi all, I'm hoping that someone can help or point me in the right direction. I have two events being fed into Splunk: one is the raising of an event flag, the other is the removal of the event flag.

Raising:
Sep 2 10:32:45 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:32:42 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was raised

Removal:
Sep 2 10:34:33 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:34:33 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was removed

After some browsing online and through the Splunk support pages, I have been able to put together the following query:

(index=[INDEX] *agentmissing*) ("msg='AgentMissing' status flag was raised" OR "msg='AgentMissing' status flag was removed")
| rex field=_raw ".*\)\s+(?<status>.*)"
| stats latest(_time) as flag_finish by connection_type
| join connection_type
    [ search index=[INDEX] ("msg='AgentMissing' status flag was raised") connection_type=*
    | stats min(_time) as flag_start by connection_type]
| eval difference=flag_finish-flag_start
| eval flag_start=strftime(flag_start, "%Y-%m-%d %H:%M")
| eval flag_finish=strftime(flag_finish, "%Y-%m-%d %H:%M")
| eval difference=strftime(difference,"%H:%M:%S")
| table connection_type, flag_start, flag_finish, difference
| rename connection_type as Hostname, flag_start as "Flag Raised Time", flag_finish as "Flag End Time", difference as "Total Time"
| sort - difference

The above works; however, because I am using the "stats latest" command, it only shows the latest occurrence of the event. I would like to display the time between these events for multiple occurrences. So, as in the example above where the flag was up between 7:47 and 9:31, I would also like to see flags for other time occurrences. TIA!
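A hedged alternative that pairs every raise with the removal that follows it, instead of keeping only the latest pair (a sketch; it assumes raise and removal events strictly alternate for each connection_type):

index=[INDEX] ("msg='AgentMissing' status flag was raised" OR "msg='AgentMissing' status flag was removed")
| transaction connection_type startswith="was raised" endswith="was removed"
| eval flag_start=strftime(_time, "%Y-%m-%d %H:%M")
| eval flag_finish=strftime(_time + duration, "%Y-%m-%d %H:%M")
| eval difference=tostring(duration, "duration")
| table connection_type, flag_start, flag_finish, difference
| rename connection_type as Hostname, flag_start as "Flag Raised Time", flag_finish as "Flag End Time", difference as "Total Time"

transaction emits one result per raise/removal pair, with _time set to the earliest event in the pair and duration holding the gap in seconds, so every occurrence appears as its own row.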
I have to decrease the font size of the field names, like subgroup, platforms, bkcname, etc. (all fields present in the table) and make the count, which is present in the table, bold. But I want to change only one particular table, not all the tables present in the dashboard.

<row>
  <panel>
    <title>Platform wise Automation Status Summary</title>
    <table>
      <search>
        <query>index=network_a

I want to change the above table (Platform wise Automation Status Summary). Any help would be greatly appreciated!!
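A minimal sketch of the usual pattern in Simple XML: give that one table an id, then scope CSS to it from a hidden html panel. The id value and the class selectors below are assumptions; confirm the exact element classes with your browser's developer tools.

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* shrink the header (field name) font in this table only */
        #platform_summary_table th { font-size: 10px; }
        /* bold the cell values in this table only */
        #platform_summary_table td { font-weight: bold; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>Platform wise Automation Status Summary</title>
    <table id="platform_summary_table">
      ...
    </table>
  </panel>
</row>

Because every selector starts with the table's id, the other tables in the dashboard are left untouched.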
Hi, I'm trying to extract some fields from my Aruba access point in order to be CIM compliant. For authentication logs I have two kinds of events:

Login failed:
cli[5405]: <341004> <WARN> AP:ML_AP01 <................................> Client 60:f2:62:8c:a8:a7 authenticate fail because RADIUS server authentication failure

Login success:
stm[5434]: <501093> <NOTI> AP:ML_AP01 <..................................> Auth success: 60:f2:62:8c:a8:a7: AP ...................................ML_AP01

My goal is to extract the MAC address after "Client" in the first log and the MAC after "Auth success" in the second one into a common field called "src". Can someone please help me? Thanks in advance!
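A hedged sketch that covers both formats with one regex, shown as a search-time rex; the same pattern could live in an EXTRACT-src stanza in props.conf for this sourcetype:

... | rex field=_raw "(?:Client|Auth success:)\s+(?<src>(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2})"

The alternation matches either "Client " or "Auth success: " immediately before the MAC address, and the capture group is named src so it lines up with the CIM Authentication data model.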
Hi all, I have JSON data as below.

{
  "Info": {
    "Unit": "ABC",
    "Project": "XYZ",
    "Analysis Summary": {
      "DB 1": {"available": "1088kB", "used": "172.8kB", "used%": "15.88%", "status": "OK"},
      "DB2 2": {"available": "4096KB", "used": "1582.07kB", "used%": "38.62%", "status": "OK"},
      "DB3 3": {"available": "128KB", "used": "0", "used%": "0%", "status": "OK"},
      "DB4 4": {"available": "16500KB", "used": "6696.0KB", "used%": "40.58%", "status": "OK"},
      "DB5 5": {"available": "22000KB", "used": "9800.0KB", "used%": "44.55%", "status": "OK"}
    }
  }
}

I want to create a table like this:

Database  available  used       used%   status
DB1       4096KB     1582.07kB  38.62%  OK
DB2       1088kB     172.8kB    15.88%  OK
DB3       16500KB    6696.0KB   40.58%  OK
DB4       22000KB    9800.0KB   44.55%  OK
DB5       128KB      0          0%      OK

I know how to extract the data, but I am not able to put it into this format in a table. Does anyone have an idea on this?
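A hedged sketch of one way to pivot this, assuming the whole JSON document arrives as a single event (index and sourcetype names are placeholders): flatten the object with spath, transpose the per-database fields into rows, split each field name back into a database and a metric, then pivot with xyseries.

index=my_index sourcetype=my_json
| spath
| rename "Info.Analysis Summary.*" as *
| table *.available *.used *.used% *.status
| transpose 0 column_name=field
| rename "row 1" as value
| rex field=field "^(?<Database>.+)\.(?<metric>[^.]+)$"
| xyseries Database metric value

If several of these documents land in the same time range, you would need to carry a per-event key through the transpose step as well.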
Hi, I have installed the Splunk forwarder on an AIX server and can successfully see the server-level results (CPU, df, memory) in the dashboard. I am now planning to install the Splunk add-on for WebSphere Process Server version 7.0. May I know whether the "Splunk Add-on for WebSphere Application Server" will work for this older version of WebSphere Process Server? Your inputs will be appreciated.
Hello Splunk Enjoyers! I have a problem. Information about routers arrives every minute. What I have: name_of_router and the client's serial_number in index=routers. What I want: an alert that fires if a serial_number has changed. How should I do this? @splunk
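A minimal sketch of one approach, assuming a "change" means more than one distinct serial number per router inside the alert's search window (schedule it so consecutive windows overlap, e.g. run every 5 minutes over the last 10):

index=routers
| stats dc(serial_number) as serial_count values(serial_number) as serials by name_of_router
| where serial_count > 1

Set the alert to trigger when the number of results is greater than zero; the serials column then shows both the old and new values.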
Hi all, I wish to generate login times for a list of users specified in a lookup table titled user_list.csv. The column header for the list of users in this file is "IDENTITY". Currently I have an index that, on its own without the lookup table, already has a field called "Identity". This index gives me any user's login times within the specified timeframe as long as I specify Identity="*"; without specifying Identity="*" or a user's name, the events do not populate. What I am trying to do is input a specified list of users and check their login times. However, when I use the following search query, I get 0 events:

index=logintime [|inputlookup user_list.csv |fields IDENTITY |format] IDENTITY="*"
| table _time, eventType, ComputerName, IDENTITY

I have already checked that the lookup table is within the same app. Please help, thank you.
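Field names in Splunk searches are case-sensitive, so the subsearch above generates IDENTITY="..." terms while the indexed field is called Identity, and the extra IDENTITY="*" condition can never match. A hedged sketch of the fix, renaming inside the subsearch so the generated terms use the indexed field's name and dropping the wildcard condition:

index=logintime [| inputlookup user_list.csv | fields IDENTITY | rename IDENTITY as Identity | format]
| table _time, eventType, ComputerName, Identity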
Hi, I have a metric with one dimension containing an integer value. I need to apply a calculation to the metric based on the dimension value. The formula to apply to each data point would be something like this:

metric_value*100/dimensionA_value

I have seen dimensions used extensively as filters, but I was not able to find a way to reference the dimension value so that I can use it in a calculation like the one above. Any idea how I could accomplish that? Thanks in advance, Cesar
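If this is the Splunk platform's metrics store, a hedged sketch with mstats: group by the dimension so each result row carries its value, then convert it from string to number for the calculation. The index, metric, and dimension names below are placeholders.

| mstats avg(_value) as metric_value WHERE index=my_metrics AND metric_name="my.metric" span=1m BY dimensionA
| eval adjusted = metric_value * 100 / tonumber(dimensionA)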
I am getting "The search job terminated unexpectedly" in a dashboard. In search, the index works fine, and this happens in one dashboard only; all other dashboards work fine. I don't know what the reason for this issue is. Can anyone please help me? Thanks in advance.
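A hedged first diagnostic step: check splunkd's internal error logs around the moment the panel fails, which usually names the component that killed the job (a sketch; narrow the time range to the failure window):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count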
Hi, how can I extract the open episodes, together with the ServiceNow incident raised against each episode, in Splunk ITSI? Thanks!
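A heavily hedged sketch of one approach. It assumes episodes are searchable in the itsi_grouped_alerts index, that status 5 means closed, and that the itsi_notable_event_external_ticket lookup keys on the episode's group id; verify all three assumptions in your environment before relying on this.

index=itsi_grouped_alerts
| stats latest(status) as status by itsi_group_id
| where status != 5
| lookup itsi_notable_event_external_ticket event_id as itsi_group_id OUTPUT ticket_id ticket_system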
We finally migrated from the Microsoft Azure Add-on for Splunk to the Splunk Add-on for Microsoft Cloud Services. In the Microsoft Azure Add-on for Splunk inputs configuration it was possible to specify the Event Hub sourcetype manually, but in the Splunk Add-on for Microsoft Cloud Services we have to choose from the provided values. The problem is that we need the values azure:ad_signin:eventhub and azure:ad_audit:eventhub, but the Splunk Add-on for Microsoft Cloud Services provides only mscs:azure:eventhub.

Based on the log information from Azure, there is a category field with the values SignInLogs and AuditLogs. From it I can tell which is the audit log and which is the sign-in log, and change the sourcetype for each log type. On the heavy forwarder where the app is deployed (/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/default) I added the following config, but nothing changed; the sourcetype stays mscs:azure:eventhub. Any ideas what I'm missing?

props.conf
[mscs:azure:eventhub]
TRANSFORMS-rename = SignInLogs,AuditLogs

transforms.conf
[SignInLogs]
REGEX = SignInLogs
SOURCE_KEY = field:category
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_signin:eventhub
WRITE_META = true

[AuditLogs]
REGEX = AuditLogs
SOURCE_KEY = field:category
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_audit:eventhub
WRITE_META = true
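A hedged sketch of a likely fix: index-time transforms run before any search-time field extraction, so SOURCE_KEY = field:category has nothing to read at that stage. Matching the category value in the raw JSON instead usually works, and the files belong in local/ rather than default/ so upgrades don't overwrite them (restart the heavy forwarder afterwards):

transforms.conf
[SignInLogs]
REGEX = "category":\s*"SignInLogs"
SOURCE_KEY = _raw
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_signin:eventhub

[AuditLogs]
REGEX = "category":\s*"AuditLogs"
SOURCE_KEY = _raw
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_audit:eventhub

WRITE_META is only needed when writing indexed fields, not when rewriting a metadata key through DEST_KEY. The exact key casing in the REGEX is an assumption; check a raw event to confirm it.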
Hi All, is there a way in which Splunk can generate an alert when backup and restoration exercises are conducted? Is there any use case that can do this? Any assistance on this would be appreciated.
Hi, is there a way to use CSS to fix the font size of the text in the Status Indicator?
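A hedged sketch of the usual pattern in a Simple XML dashboard: give the panel containing the Status Indicator an id and inject CSS from a hidden html panel. The .status-indicator selector is an assumption based on inspecting the visualization with browser developer tools; confirm the class name in your version before relying on it.

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* status_panel is the id set on the panel holding the Status Indicator */
        #status_panel .status-indicator { font-size: 24px !important; }
      </style>
    </html>
  </panel>
</row>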
Hi, we are using the VMware Carbon Black Cloud app, and the VMware logs are pulled from AWS S3 buckets. The index has logs; however, the app's dashboards, when configured with the same index, are not working. Please help remediate. Thanks.
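A hedged place to start: many Splunk apps drive their dashboards through search macros or eventtypes rather than reading an index directly, so if the events are searchable but the panels stay empty, the app's index macro may still point at the default index, or the sourcetypes may not match what the dashboards expect. A sketch for listing the app's macros (the name filters are assumptions):

| rest /servicesNS/-/-/admin/macros
| search eai:acl.app="*carbon*" OR title="*cbc*"
| table title definition eai:acl.app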
Splunk Lantern is a customer success center providing advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles which help you see everything that's possible with data sources and data types in Splunk. Our library is constantly growing, and we've got a fresh new batch of articles to share with you! Here's a full breakdown of everything we've published in the past month.

New Data Articles

Splunk Lantern's Data Type and Data Source articles link you to all of the relevant apps and add-ons you'll need to work with, as well as listing all of the use cases we have for that data descriptor. These articles are great if your deployment is already ingesting a data source and you want to see what other use cases you can accomplish with it, or if you're curious about what you could gain by ingesting a new data source or type of data into your deployment.

This month we've launched a new data source article for Syslog and a new in-depth guide that helps you set up a Windows-only computer network to run Splunk Connect for Syslog (SC4S) on a Windows server. Together with another article we published a few months ago, Understanding best practices for Splunk Connect for Syslog, these new articles provide you with a solid base of information to help you implement SC4S smoothly and efficiently.

AWS: Migrating inputs to Data Manager is another new article that shows you how to use Splunk's Data Manager to improve your existing processes for onboarding AWS data, or to help you onboard this data source easily if you're looking to ingest it into your deployment for the first time. Check it out if this data source is one you'd like to explore further, and don't forget to take a look at our other AWS data source articles too for more information about the use cases you can achieve.

Getting started with the Splunk App for Ethereum is a new addition to our range of Blockchain articles, with this new guide walking you through how to set up and use the dashboards, macros, and searches in this app.

9.0.1 Updates and Product Learning

One of our most popular new articles this month is our Splunk 9.0.1 FAQ, which covers the most commonly asked questions from Splunk's August 2022 security advisories that can be addressed by upgrading to Splunk Enterprise 9.0.1. While you should also check the Splunk Product Security page for the latest updates, this FAQ covers specific questions that Splunk Enterprise and Splunk Cloud Platform users might have.

Another handy piece of product learning that's just gone live is Preventing concurrency issues and skipped searches in Enterprise Security. Multiple simultaneous correlation searches can cause search concurrency issues and skipped searches, so they should be scheduled differently; this article provides a step-by-step guide so you can be sure you're configuring your searches correctly to prevent this issue.

New Security and Observability Articles

Identifying high-value assets and data sources is a fresh addition to our Use Case Explorer for Security, which is designed to help you identify and implement prescriptive Security use cases that drive incremental business value. This article helps you prepare for attacks that specifically target your organization's high-value assets, preventing disruption to business continuity and reputational or regulatory risk.
On the Observability side, we've published two articles this month that help you work with Content Packs for Splunk IT Service Intelligence or IT Essentials Work. Gaining better visibility into your third-party APM solutions shows you how you can use the Content Pack for Third-party APM to gain insights across legacy APM vendors. Gaining better visibility into Microsoft Exchange explains how you can use the Content Pack for Microsoft Exchange to see everything going on across your Microsoft Exchange environment, so you can find and fix issues quickly.

Finally, Monitoring AWS Fargate deployments powered by Graviton2 processors shows you how you can use Splunk software to track AWS Fargate clusters and SLA resource utilization, identify the root cause of task crashes, and create alerts and respond to them in real time.

What else?

We've launched a new feedback widget on our site! This tab on the left-hand side allows you to tell us how articles are working for you, or where improvement is needed. The survey is completely anonymous, so you won't receive a direct response to any comments you leave; however, you can always talk to us directly on the Splunk User Groups Slack or Reddit. Please take the time to leave feedback on our articles so we can make sure our content is effective in helping you succeed with Splunk!

Lastly, if you have been accessing Splunk Lantern articles using the knowledge bots of the Splunk Product Guidance app in the Splunk Cloud Platform, please note that those bots have been removed based on feedback. We apologize if you found those bots helpful, but don't worry - none of the great content has gone away. You can still search for help with SPL and data source onboarding at any time on lantern.splunk.com.

We hope you've found this update helpful. Thanks for reading!

— Kaye Chapman, Customer Journey Content Curator for Splunk Lantern
Hi, I have a scheduled alert that sends out an email every 7 days. The sysadmin turned off the server for whatever reason and forgot to turn it back on, so obviously the report didn't trigger. Is it possible to get or generate the report that was supposed to come in? I'm at a loss, having only found out today. Thanks.
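A hedged way to reproduce the missed results by hand: run the saved report over the window the scheduler missed using the savedsearch command, with the time range picker set to the seven days the report should have covered (the report name below is a placeholder):

| savedsearch "My Weekly Report"

From there you can export the results or send them on yourself. Whether the report's own stored time range overrides the picker depends on how it was defined, so double-check the effective range in the job inspector.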