Hi guys. I'm trying to use the Configuration Item field in the ServiceNow integration to pass a dynamic field to SNOW, so I'm trying something like $System_name.result$, but it didn't work. Do you know if this field accepts this format? Thank you. Clecimar
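A minimal note, assuming the Configuration Item field accepts standard Splunk alert-action tokens at all (worth confirming in the add-on's docs): Splunk result tokens put "result" first, so the usual form would be

$result.System_name$

rather than $System_name.result$.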
Looking to improve your security posture and address our June 2022 security advisories? You have come to the right place, and here are some helpful resources for you!

* Product Security Page - Subscribe to get notified of all recent advisories
* Improve Your Security Posture Tech Talk - Technical webinar focusing on our 9.0 security features and June 2022 security advisories
* Splunk Enterprise Upgrade Best Practices - Lantern page with general tips for upgrades
* Customer FAQ - Common questions on our recent security posture initiative
* Documentation - All the juicy details on how to take action

Still have questions?
* If related to these advisories, you can comment below!
* If related to securing your Splunk instance, you can post a new question on this board!
* If specific to Splunk Enterprise or Splunk Cloud Platform, you can post to those boards!
Watch Now: Improve Your Security Posture

Customers are at the center of everything we do at Splunk, and security is our top priority. On top of a series of new and improved capabilities, the Splunk platform now ships with an improved security posture. Splunk Enterprise 9.0 specifically includes three new security features and a series of automatically implemented security settings, and addresses eight security vulnerabilities with fixes that go deeper than just "patching." Splunk focused on intra-Splunk transport layer security (TLS), securing deployment servers and forwarders, securing app development, and added search processing language (SPL) safeguards.

Watch the Tech Talk to:
* Get more details on our newest security fixes and features, including Splunk Assist, Upgrade Readiness App enhancements, and Smart Card Authentication
* Learn how to address the eight security vulnerabilities for June 2022
* Understand how you can leverage Splunk customer success resources to help ensure your environment is secure and up to date
Hi, Say I have the following table:

Name  2022-06-07 10:01:14  2022-06-07 22:01:13  2022-06-08 10:01:11  2022-06-08 22:01:25  2022-06-09 10:01:22  2022-06-09 22:00:59  2022-06-10 10:01:28
a     301                  300                  302                  303                  301                  400                  412
b     200                  220                  235                  238                  208                  300                  302

Can I color a cell based on its rate of increase from the previous value? For instance, if the value increased by 10% it would be yellow, by 20% orange, and so on. I'm looking for a solution based on Simple XML where no additional files are needed (see the sketch below). Thanks.
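A minimal Simple XML sketch, with the caveat that an expression color palette only sees each cell's own value, so the increment rate has to be materialized as its own column first; pct_change here is a hypothetical field you would compute in SPL (for example with streamstats over the previous value) before building the table:

<format type="color" field="pct_change">
  <colorPalette type="expression">case(value >= 20, "#F1813F", value >= 10, "#F8BE34", 1==1, "#FFFFFF")</colorPalette>
</format>

Coloring the date columns themselves based on a neighboring column's value is not something the built-in palettes support without custom JavaScript, which is why this sketch colors the computed change column instead.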
Customers are at the center of everything we do at Splunk, and security is our top priority. On top of a series of new and improved capabilities, the Splunk platform now ships with an improved security posture. Splunk is committed to helping customers identify and remediate security issues, including providing advisories and patches for any vulnerability considered "Critical" or "High". Splunk remains hyper-vigilant and focused on outstanding customer experiences and outcomes.

So what are the updates, and how can you learn more? Splunk Enterprise 9.0 specifically includes three new security features and a series of automatically implemented security settings, and addresses eight security vulnerabilities with fixes that go deeper than just "patching" Splunk. The work focused on intra-Splunk transport layer security (TLS), securing deployment servers and forwarders, securing app development, and added search processing language (SPL) safeguards.

New or Enhanced Security Features:
* *New* Splunk Assist - a single place to monitor your Splunk Enterprise deployment and see recommendations to improve your security posture.
* *Enhanced* Upgrade Readiness App - easily identify apps impacted by Python 3.0 certificate validation and access step-by-step upgrade guidance.
* *New* Smart Card Authentication - Splunk now natively supports multi-factor authentication (MFA) with Common Access Card cryptographic certificates issued by the US DoD as a form of authentication.

Automatically Implemented Security Settings:
* Improved dashboard security with sanitization on input fields
* Increased control for admins with greater splunkd roles and capabilities restriction options
* Enhancements to third-party packages including Node.js, OpenSSL, and many more
* Easier risk management with user-friendly SPL safeguards
* Role-based filtering

Security Advisories to Take Action On (for June 2022):
* Intra-Splunk transport layer security (TLS) is a best practice for any application that communicates over a public network. The three available updates in Splunk Enterprise 9.0 enable you to configure TLS at the right time for your business (SVD-2022-0602, SVD-2022-0603, SVD-2022-0606).
* Securing forwarders and deployment servers is important to the security of your overall Splunk deployment. These three updates enable you to apply stronger security at the forwarder (SVD-2022-0605, SVD-2022-0607) as well as between the deployment server and forwarder (SVD-2022-0608).
* Securing the app model is important to the security of your overall Splunk deployment, as apps are considered part of your trust boundary. This update enables you to apply stronger security between the application and your Splunk system (SVD-2022-0601).
* Splunk search processing language (SPL) is powerful, and with great power comes great responsibility. This update makes it possible to more fully safeguard against edge-case scenarios for data exfiltration and arbitrary code execution using SPL (SVD-2022-0604).

We encourage you to prioritize security by addressing these advisories and upgrading to Splunk Enterprise 9.0 today! For more details and guidance on next steps, please watch our Tech Talk - Improve Your Security Posture, explore Splunk Lantern for upgrade assistance, view our FAQ, and subscribe to our Product Security Page to get timely updates on all of our advisories.

Still have questions? Please ask any technical clarifications on our Answers forum.

Judith - Platform Product Marketing Manager
Hi Community,

I have a dashboard that gives me an overview of the details. When I click on one of the rows, it takes me to a different dashboard, which takes the time from this dashboard and performs a granular search, within time limits, based on the parent ID. This search drives a panel that shows no data at all. When I look at the SPL behind the empty panel, I realise that it searches on milliseconds: the time range is within a single second. The search is driven by a data model acceleration, which only supports second-level granularity, so if I change the time range to more than a second I get the desired results.

To fix this, the only options I can think of are reconstructing the SPL without data models, which would slow down the search, or manipulating the time range so that I can get the data. Is there some other option I can use to get the desired results (see the sketch below)? Thanks in advance.

Regards, Pravin
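One possible direction, sketched with hypothetical datamodel, field, and token names: round the drilldown tokens outward to whole seconds (for example with <eval> tokens applying floor/ceiling in the target dashboard), keep the accelerated tstats search over that slightly widened window, and accept one-second granularity in the panel:

| tstats summariesonly=true count from datamodel=My_Model.My_Dataset
    where My_Dataset.parent_id="$parent_id$" earliest=$earliest_floor$ latest=$latest_ceil$
    by _time span=1s

This trades sub-second precision for keeping the acceleration, so it only helps if second-level resolution is acceptable for that panel.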
Hello, First of all, sorry for my lack of knowledge if my question looks silly.

I have a datasource providing events as follows: State Start Timestamp / UserName / StateName / Duration of the State in seconds / State End Timestamp

I'm trying to produce a timechart showing the duration in each state for each user with a 1h span, so that we can see clearly the time spent in each state by the users for each hour of the day.

The issue is that a user can start a state at a given time with a duration longer than 1h. For example, a user logs in and is available at 8:32, then stays in the "Available" state for 2h.

What I get so far with a basic timechart span=1h of the states by user:
2h in the 8h span
nothing in the 9h span
nothing in the 10h span

I would need to manipulate the query or the events so that the timechart reports, in this example:
28 min in the 8h span
1 hour in the 9h span
32 min in the 10h span

as the state lasted from 8:32 to 10:32.

Here's my query today:

| eval AvailableDuration = if(State="Available",Duration,0)
| eval BusyDuration = if(State="Busy",Duration,0)
| eval CallDuration = if(State="In Call",Duration,0)
| timechart span=1h fixedrange=false useother=f limit=0 sum(CallDuration) as "In call" sum(AvailableDuration) as "available" sum(BusyDuration) as "Busy" by UserName

Is there a way to redistribute the durations by manipulating the data so that each hourly span is properly populated (see the sketch below)? Thanks in advance for your help!
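A minimal sketch of one common pattern, assuming _time is the state start and Duration is in seconds: split each event into one slice per hour it overlaps with mvrange and mvexpand, then apply the per-state evals from the query above to the slice instead of the raw Duration:

| eval start=_time, end=_time + Duration
| eval hour_start=mvrange(floor(start/3600)*3600, end, 3600)
| mvexpand hour_start
| eval slice=min(end, hour_start + 3600) - max(start, hour_start)
| eval _time=hour_start
| eval AvailableDuration = if(State="Available",slice,0)
| eval BusyDuration = if(State="Busy",slice,0)
| eval CallDuration = if(State="In Call",slice,0)
| timechart span=1h fixedrange=false useother=f limit=0 sum(CallDuration) as "In call" sum(AvailableDuration) as "available" sum(BusyDuration) as "Busy" by UserName

For the 8:32 + 2h example this yields 28 min in the 8h bucket, 60 min in the 9h bucket, and 32 min in the 10h bucket, because each slice is clipped to the overlap between the state interval and the hour.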
Hi, Say I have this table:

Name  Date                 Flows
a     2022-06-13 23:01:26  200
a     2022-06-13 10:01:26  301
b     2022-06-13 23:01:26  504
b     2022-06-13 10:01:26  454

I'd like to create a table that uses the values of the "Date" column as new columns, grouping all identical "Name" values into one line, as follows (where the cell values are "Flows"):

Name  2022-06-13 23:01:26  2022-06-13 10:01:26
a     200                  301
b     504                  454

I tried several approaches but failed. Could you assist?
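A minimal sketch, using the field names from the table above: this row-to-column pivot is what xyseries does.

| xyseries Name Date Flows

An equivalent form is | chart values(Flows) over Name by Date.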
Need a similar query for Splunk.

SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT]
FROM RES_DATA a
INNER JOIN INV_DATA b ON a.[CUSTOMER ID] = b.[CUSTOMER ID]
GROUP BY a.[CUSTOMER ID], a.[NAME]
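A minimal sketch of the usual stats-based translation, with hypothetical index and sourcetype names standing in for the two tables; SPL typically avoids join by searching both datasets together and aggregating on the shared key:

(index=main sourcetype=res_data) OR (index=main sourcetype=inv_data)
| stats values(NAME) as NAME, sum(AMOUNT) as TOTAL_AMOUNT by CUSTOMER_ID

Here values(NAME) picks up the name from the RES_DATA events while sum(AMOUNT) totals the INV_DATA events sharing the same CUSTOMER_ID; | join is also available but is generally slower and subject to result limits.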
I have monitored 2 applications (one Java app and one Node.js app) in AppDynamics, but when trying to query the applications, it returned NO applications. I'd appreciate your advice on what might be wrong. Thank you.

I also tried to copy the REST URL path from the Metric Browser:

https://<>.saas.appdynamics.com/controller/rest/applications/newhello/metric-data?metric-path=Overall%20Application%20Performance%7CCalls%20per%20Minute&time-range-type=BEFORE_NOW&duration-in-mins=720

And the response shows:

HTTP Status 400 - Invalid application id newhello is specified
type: Status report
message: Invalid application id newhello is specified
description: The request sent by the client was syntactically incorrect.
How can we subtract the dequeue count now from the dequeue count an hour before, to monitor whether queues are progressing? Log lines are like these:

2022/06/14 09:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12907 dequeueCount=15466
2022/06/14 08:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12850 dequeueCount=15409

I'm looking to do a subtraction of dequeueCount (15466 - 15409) and set up an alert if it equals 0. Thanks
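A minimal sketch, assuming the key=value pairs are auto-extracted and using a placeholder index name: take the latest reading per hour per queue, then compare consecutive hours with streamstats.

index=mq_logs queue=*
| bin _time span=1h
| stats latest(dequeueCount) as dequeueCount by _time, queue
| streamstats global=f window=2 range(dequeueCount) as dequeued_last_hour by queue
| where dequeued_last_hour = 0

Note that the first bucket per queue also yields 0 (only one reading in the two-event window), so it's worth scoping the alert's time range or excluding the earliest bucket.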
When using DBXQUERY, is a | search needed after the query?

| dbxquery connection="DRTP-Connection" query="
SELECT <X, Y, Z>
FROM <wherever>
WHERE <wherever>;"
| search
| table X, Y, Z
| sort - Z

My query appears to work either way, but I have often been critical of using extra 'search' commands within a query, so I am curious as to whether there is a performance hit.
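A short note rather than an authoritative answer: a bare | search with no terms matches every event, so it acts as a pass-through and can simply be dropped; any overhead should be negligible, but it adds nothing.

| dbxquery connection="DRTP-Connection" query="SELECT <X, Y, Z> FROM <wherever> WHERE <wherever>;"
| table X, Y, Z
| sort - Z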
Hello everyone, I'm trying to send my cluster pods' logs to Splunk with collectorforkubernetes. When I launch the YAML config file for the collector, the pods seem to be running well, but I don't get any data in Splunk. Running kubectl logs on my pods shows a "connection refused" error. Why is the connection refused? Is there something to do with my indexes? Thanks,
Hi, We are looking to install AppDynamics on a stand-alone SAP HANA database that is not attached to an SAP ECC system. I'm looking for a proper step-by-step procedure to get this installation done. If anyone has a clue on how to configure AppD monitoring on SAP HANA, please share the procedure. Thanks in advance. ~Azhar
OLD query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp,_time
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)
| sort - count
| transpose header_field=_time

When I set this one as an alert, it treats the client_isp and _time as part of the result, so even when there were no matches it sends a blank alert with only the client_isp and time in the first column.

NEW query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp,_time
| transpose header_field=_time
| sort - count
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)

With this one, there were no results at all. What may be wrong with this query (see the note below)?
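A likely explanation, offered as a hedged note rather than a confirmed diagnosis: transpose turns the former column headers into rows and renames the columns ("column", "row 1", "row 2", ...), so after | transpose there is no longer a field named count, and | where count >= 600 matches nothing. The filter has to run before the transpose, keeping the OLD query's ordering:

| stats count by client_isp,_time
| where count >= 600
| transpose header_field=_time

The blank-alert symptom of the OLD query is usually handled in the alert's trigger condition instead (for example, trigger only when the number of results is greater than 0) rather than by reordering the search.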
Hello, I want to report what looks like a bug in version 22.6: the controller cannot get information on CPU, RAM, etc. from an AIX server. Something in a script seems to be wrong ("ksh run_co: not found" in the attached screenshot). I fixed it by using the 22.2 version of the machine agent.
Hi All, Any help or suggestions? We are confronted with this problem, which does not stop. We have already tried the following, but without result:
1) We optimized the performance of the Stream sensor.
2) We switched from disk buffering to memory buffering for the syslog receiver.
But the issue remains. Any ideas, please?
Cheers, Auguste
Hi, I have the following Splunk query.

| dbxquery connection="FFconed_feTenant" query="select count(file_name) as file_count, DATE_FORMAT(created_at,\"%m/%d/%y %W\") as date from ida_files_inventory where created_at > Date_sub(Curdate(), INTERVAL 7 Day) and created_at < Curdate() group by DATE_FORMAT(created_at,\"%m/%d/%y %W\")"

It gives me the per-day count of files received in the last 7 days along with the date. The result is as follows.

date                file_count
06/07/22 Tuesday    79
06/08/22 Wednesday  46
06/09/22 Thursday   57
06/10/22 Friday     5
06/11/22 Saturday   5
06/12/22 Sunday     227
06/13/22 Monday     187

I want to calculate the running average of file_count across these days. For example:
for the 1st day, the running average is 79/1 = 79
for the 2nd day, the running average is (79+46)/2 = 62.5
for the 3rd day, the running average is (79+46+57)/3 = 60.67
and so on. I want to write a query for this (see the sketch below). Please help me with this.
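A minimal sketch: streamstats computes exactly this cumulative average, so appending one line to the dbxquery above should do it, assuming the rows come back in chronological order (worth enforcing with an ORDER BY on created_at in the SQL, since the date format shown doesn't sort chronologically as a string):

| streamstats avg(file_count) as running_avg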
Does each search head cluster need its own dedicated deployer server? For example, if we have a three-server search head cluster called Search Head Cluster A with a deployer server, can we use that same deployer server for another three-server cluster, say Search Head Cluster B?
My predecessor installed a search head cluster and used the Index Cluster Master as the deployer server for the search head cluster. I would like to move that role to a separate machine. Can that be done without having to rebuild the search head cluster?