Hi, we have a cluster environment with 6 indexers. Each host has 128 GB of RAM, but as far as I can see, Splunk is using only ~4 GB. Is there any way to optimize (speed up) memory usage and let Splunk use, for example, 100 GB of RAM? I have tons of different indexes with different dashboards. If yes, how do I do it?

Cheers,
Konrad
Hi,

How can I find events that did not occur daily? Here is the scenario. I have two fields in my log file, <servername> and <CLOSESESSION>, and I need to know, per servername, when the daily CLOSESESSION count is 0. Every day I expect CLOSESESSION to appear in each server's logs; if one or more servers have no CLOSESESSION, it means something is going wrong.

I need two searches here: first, extract all server names from the file names that exist in the source path, using metadata for a faster result; then, in a second query, check which servers have no CLOSESESSION. FYI: I'd rather not use a CSV lookup for the first step; I'd prefer to do it with a multi-search and a join. Something like this:

1. The first search returns the list of all log files that exist (per server):
| metadata type=sources index=my_index
| table source

2. The second search filters the lines containing CLOSESESSION:
index="my_index"
| search CLOSESESSION
| rex (extract the server names from the "source" field from step 1)
| rex (extract the count of CLOSESESSION)

Then join them and show only the servers that have no CLOSESESSION.

Here are the logs (the server names do not exist in the log lines themselves; they are extracted from the log file name. I've shown them inline to make the goal clear):
23:54:00.957 app server 1 module: CLOSESESSION
23:54:00.958 app server 3 module: CLOSESESSION
23:54:00.959 app server 4 module: CLOSESESSION

Expected output, step 1:
servername
server 1
server 2
server 3
server 4

Expected output, step 2:
servername    cause
server 2      NOCLOSESESSION
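A minimal sketch of that two-step approach (hedged: the rex pattern assumes the server name appears in the source path as "server N"; adjust it to your real file naming):

| metadata type=sources index=my_index
| rex field=source "(?<servername>server \d+)"
| dedup servername
| table servername
| join type=left servername
    [ search index=my_index CLOSESESSION
      | rex field=source "(?<servername>server \d+)"
      | stats count as closesession_count by servername ]
| fillnull value=0 closesession_count
| where closesession_count=0
| eval cause="NOCLOSESESSION"
| table servername cause

The left join keeps every server from the metadata list, so servers with no CLOSESESSION events at all fall through with a count of 0 and get flagged.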
I am getting this error on deployment server 8.0.9. How do I fix it?

./splunk reload deploy-server -class <my serverclass name>
An error occurred: Could not create Splunk settings directory at '/root/.splunk'.
Hello,

I read my data with the inputlookup command and try to count the different occurrences of the field fields.SID, as below:

| makeresults
| eval time=relative_time(now(),"-24h")
| eval time=ceil(time)
| table time
| map [ | inputlookup incidents where alert_time > $time$ ]
| join incident_id [ | inputlookup incident_results ]
| fields fields.SID
| search fields.SID=*
| mvexpand fields.SID

Unfortunately, whatever tricks I try, I always get several SIDs packed into a single event. How would I split them so that each fields.SID is in a separate row, so I can count them?

Kind regards,
Kamil
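For what it's worth, a hedged sketch of the usual fix: field names containing dots need single quotes in eval, and if fields.SID arrives as one delimited string rather than a true multivalue, it has to be split before mvexpand. Renaming first keeps the rest simple (the comma delimiter is an assumption; use whatever separator your lookup actually stores):

... your search as above, up to | search fields.SID=* ...
| rename fields.SID as SID
| eval SID=if(mvcount(SID)=1, split(SID, ","), SID)
| mvexpand SID
| stats count by SID

If the SIDs are already a real multivalue field, the eval/split line leaves them untouched and mvexpand alone does the job.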
Hi,

We have a status in one log type, where we would like to track whether an account is in the "bypassed" state. Example:

2021-13-10 user1 bypassed
2021-13-10 user2 enabled
2021-13-09 user2 bypassed
2021-13-08 user3 bypassed
2021-13-08 user3 active
2021-13-08 user3 bypassed
2021-13-07 user3 active

How can we find the last 2 statuses per user in a period of time, and then, based on the last bypassed/active status, get only the accounts that still have an active bypass status?
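A minimal sketch (hypothetical index/sourcetype, and it assumes you have user and status fields extracted): sort newest first, keep the two most recent events per user, then filter on the most recent status:

index=my_index sourcetype=my_status_log
| sort 0 - _time
| streamstats count as recency by user
| where recency <= 2
| stats list(status) as last_two_statuses first(status) as last_status by user
| where last_status="bypassed"

Because the events are sorted newest first before the stats, first(status) is each user's most recent status, and last_two_statuses shows the pair for context.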
Hi all,

I'm trying to create a search, potentially to be made into a monitoring rule later on. What I am trying to achieve is a way to detect whether a user has logged into their machine from a wildly different IP address. This will use external IP addresses only. As an example, I want to know if a user logged into the estate from an IP which wasn't the same as, or similar to, the previous day's:

User     Today           Yesterday
User A   155.123.1.1     155.123.1.1
User B   155.124.1.2     155.125.20.2
User C   155.166.2.5     22.18.254.56

In the table above, I have 3 users. Users A and B have logged in from pretty similar IPs, although user B has logged in from a different one today (this often happens in our logs). What I really want to catch is user C, who has logged in from a completely different subnet that is not similar to their IP from the previous day. This is what I have so far:

index=foo (earliest=-1d@d latest=now())
| eval TempClientIP=split(ForwardedClientIpAddress,",")
| eval ClientIP=mvindex(TempClientIP,0)
| eval ClientIP1=mvindex(TempClientIP,1)
| eval ClientIP2=mvindex(TempClientIP,2)
| search NOT ClientIP=10.*
| where LIKE("ClientIP","ClientIP")
| eval when=if(_time<=relative_time(now(), "@d"), "Yesterday", "Today")
| chart values(ClientIP) over user by when
| where Yesterday!=Today

Some context regarding the search: the ForwardedClientIpAddress field has 3 items inside. ClientIP and ClientIP1 are the same address, and ClientIP2 is the final internal address. ClientIP can be an internal address, which is why there is a NOT to remove it from the searches.

Any help would be very much appreciated.

Thanks
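One way to make "wildly different" concrete is to compare only part of the address rather than the full string. A sketch bolted onto the end of your chart (it assumes Today and Yesterday each hold a single IP; if values(ClientIP) returns several per day you would need to reduce them first, e.g. with mvindex):

... | chart values(ClientIP) over user by when
| eval net_today=mvindex(split(Today,"."),0)
| eval net_yesterday=mvindex(split(Yesterday,"."),0)
| where net_today!=net_yesterday

Comparing only the first octet keeps user B quiet but still flags user C with your example data; if you want to compare the first two octets instead, use mvjoin(mvindex(split(Today,"."),0,1),".") on each side.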
Hi guys,

We have a requirement to ingest emails into Splunk. I know a couple of apps are out there, but I could not get them working. I'm also not sure how to set up/request a mail account specifically for Splunk for this purpose, i.e. what settings should be applied, etc. I am a novice as far as mail settings are concerned, so could someone take some time to help me out here, in as much detail as possible? We are using Splunk 8.0.0.

Thanks,
Neerav
Hi all,

I have created a bar chart on my dashboard with the count of exceptions. Now I want to drill down to a separate dashboard whenever I click on any of the bars (a separate dashboard for each bar). Can we achieve such a drilldown from bar/column charts? I tried "Link to a dashboard", "Link to a custom URL", etc., but they take me to only one dashboard no matter which bar in the chart I click. I also thought of using the "Manage tokens on this dashboard" option, but that doesn't take me to a new dashboard, as it only enables in-page drilldown actions. Please suggest a way to get my desired output.

Thank you!
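For what it's worth, Simple XML does support branching a chart's drilldown via <condition> elements inside <drilldown>. A sketch follows; the exception names and dashboard paths are invented for illustration, and the match-expression syntax is worth double-checking against the Simple XML reference for your version:

<drilldown>
  <!-- route specific bars to specific dashboards -->
  <condition match="$click.value$ == &quot;NullPointerException&quot;">
    <link target="_blank">/app/search/npe_dashboard</link>
  </condition>
  <condition match="$click.value$ == &quot;TimeoutException&quot;">
    <link target="_blank">/app/search/timeout_dashboard</link>
  </condition>
  <!-- fallback: one generic dashboard, receiving the clicked value as a token -->
  <condition>
    <link target="_blank">/app/search/exceptions_detail?form.exception=$click.value$</link>
  </condition>
</drilldown>

For a column/bar chart, $click.value$ holds the clicked x-axis category (the exception name here), so each bar can be routed to its own target.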
I used this eval statement with AND conditions, but I'm only getting the result "Public" even when the condition for the value "Private" is satisfied, i.e. I'm only getting the default result. Any idea what's wrong with this statement?

| eval perm=case(block_public_acls=true AND block_public_policy=true AND ignore_public_acls=true AND restrict_public_buckets=true,"Private",1=1,"Public")
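The likely culprit: in eval expressions, a bare true is parsed as a reference to a field literally named "true" (which presumably doesn't exist in your events), so every comparison fails and case() falls through to the 1=1 default. If those fields contain the string value true, quoting it should fix things (a sketch; adjust if your data actually says True or 1 instead):

| eval perm=case(block_public_acls="true" AND block_public_policy="true" AND ignore_public_acls="true" AND restrict_public_buckets="true", "Private", 1=1, "Public")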
Hello,

I'm using the trial version (60 days) of Splunk, version 8.2.2.1, which I installed a few days ago on my Windows machine. I changed the license group to heavy forwarder, but when I try to run a search I get this error:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

My app data is local performance monitoring (Processor: selected counters C1 Time, User Time, DPC Rate, and selected instances 0, 1, 2, 3, Total). I cannot get back to the trial Enterprise license even though I have not exceeded the trial period. Is it possible to roll back this configuration and go back to the trial Enterprise license?

Thanks
While configuring JMX-based extensions, you may sometimes face connectivity issues to your application server that is exposed via JMX. You may also want to verify a metric value reported by an extension against the value reported in JConsole for the same attribute and MBean. To perform either of these tasks, see the detailed description provided in the "How do I use JConsole to test and troubleshoot connectivity" Knowledge Base article. The recommended practice is to run JConsole on the same server where the Machine Agent with the JMX-based monitor is running, and as the same user that runs the agent.
The Controller has various metric processing qualifiers with regard to aggregation, time roll-up, and cluster roll-up for processing a metric. These qualifiers are configured for each metric that an extension reports. Please refer to the "Build a monitoring extension using Java" documentation for details on the various metric processing qualifiers supported by the Controller. Your extension can have metric configurations in either the config file or the metrics file.

Contents
When my metric configurations are provided in the config.yml file
When my metric configurations are provided in the metrics.xml file
When there are no configured qualifiers for metrics in the extension

When my metric configurations are provided in the config.yml file

Sometimes you will see that an extension has its metric configurations in the config.yml file. Sample metric configuration in the config file:

- name: "numTasks"
  alias: "Num Tasks"   # Number of tasks in the application
  multiplier: "1"
  aggregationType: "AVERAGE"
  timeRollUpType: "AVERAGE"
  clusterRollUpType: "INDIVIDUAL"
  delta: "false"

You can modify any of the metric processing qualifiers to one of the supported values for its category, and the metrics will follow the same aggregation and roll-up strategy on the Controller. For additional details on the metric processing qualifiers supported by the Controller, refer to "Metric Processing Qualifier" in the Extensions and Custom Metrics section of the documentation.

When my metric configurations are provided in the metrics.xml file

Some extensions support processing of metrics from the metrics.xml file. Sample configuration in the metrics.xml file:

<metric attr="Received" alias="Received" aggregationType="AVERAGE" timeRollUpType="AVERAGE" clusterRollUpType="INDIVIDUAL"/>

In the metrics.xml file, you can modify any of these aggregation or roll-up parameters to a supported value, and the same will be followed on the Controller.

When there are no configured qualifiers for metrics in the extension

In some cases, you may see that no qualifiers are configured for metrics in the extension. In such a case, default values are assigned to these qualifiers:

Qualifier             Assigned default value
aggregationType       AVERAGE
timeRollUpType        AVERAGE
clusterRollUpType     INDIVIDUAL
Troubleshooting various kinds of AWS extension issues

The following are some troubleshooting steps for AWS extensions:

Contents
How do I troubleshoot connectivity issues for AWS extensions?
How do monitoring type settings affect AWS extensions?
What should I consider about metrics querying window settings?
Where can I find advanced extension configuration and troubleshooting information?
Additional resources

How do I troubleshoot connectivity issues for AWS extensions?

If you are facing connectivity issues with your AWS extensions, you can try executing the "get-metric-statistics" call from the same host using the AWS CLI. Install the AWS CLI on the host where your Machine Agent with the AWS extension is running. Once you have installed and configured the AWS CLI, execute the "get-metric-statistics" call to test connectivity. For AWS CLI installation details, follow the "Installing, updating, and uninstalling the AWS CLI" documentation. For details on how to execute the "get-metric-statistics" call from the CLI, refer to AWS's "Get metric statistics" documentation.

NOTE: The AWS extensions do not support assume-role-based authentication.

How do monitoring type settings affect AWS extensions?

Verify that the type of monitoring enabled in CloudWatch for a service is the same monitoring type enabled in your AWS extension. You can enable "Basic" or "Detailed" monitoring by configuring the following flag in the extension's config file:

cloudWatchMonitoring: "Basic"

"Basic" monitoring fires CloudWatch API calls every 5 minutes, whereas "Detailed" monitoring fires CloudWatch API calls every 1 minute. By default, "Basic" monitoring is enabled for all AWS extensions.

What should I consider about metrics querying window settings?

Sometimes, even after providing all configurations correctly, you may observe that metrics for an AWS extension are still not visible on the Controller. One reason for this could be that there is no metric data available in AWS CloudWatch during the configured time range. The "metricsTimeRange" property defines "startTimeInMinsBeforeNow" and "endTimeInMinsBeforeNow" and can be modified depending on when the data is available in CloudWatch:

metricsTimeRange:
  startTimeInMinsBeforeNow: 10
  endTimeInMinsBeforeNow: 0

The "startTimeInMinsBeforeNow" cannot be less than the "endTimeInMinsBeforeNow", or the extension will generate errors.

Where can I find advanced extension configuration and troubleshooting information?

The "Advanced extension configuration and troubleshooting" Knowledge Base article contains detailed information on the features provided by AppDynamics extensions. It can help you troubleshoot your extension and become more informed about the different functionalities provided across the various groups of extensions.

NOTE: Not all features are available for every extension.
Hi guys... I have a Splunk forwarder instance, v8.2.1, on an AIX server. I have a custom app configured on it, through which I am monitoring a few logs and forwarding them to an indexer. I am having a weird problem where the forwarder stops sending data every day at 1 PM and resumes at 1 AM, so I have no data coming in between 1 PM and 1 AM. Any suggestions on what the issue could be?

However, I am also forwarding splunkd.log to the same indexers, and I see that log data all through the day. The issue I am facing is only with the one custom app I have on this instance. I am sharing the inputs.conf and props.conf entries:

inputs.conf:
[monitor:///log/mycustomereport/mycustomereport.log*]
disabled = false
followTail = 0
sourcetype = mycustomereport
blacklist = \.gz
index = 20000_java_app_idx
ignoreOlderThan = 2h

props.conf:
[mycustomereport]
TIME_PREFIX = \w+\|
TIME_FORMAT = %m/%d/%Y %I:%M:%S %3Q %p
TRUNCATE = 0
MAX_EVENTS = 10000
SHOULD_LINEMERGE = false
KV_MODE = none
LINE_BREAKER = ([\n\r]+)mycustomereport
MAX_TIMESTAMP_LOOKAHEAD = 40

PS: I do see that the log file I am monitoring has data written to it consistently. I enabled debug logs, but I don't see anything that could help me understand the issue. I also don't see any crash files generated.
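One diagnostic that may narrow this down (a sketch; my_aix_host is a placeholder for your forwarder's host name): chart the forwarder's own per-source throughput metrics from _internal, to see whether reading genuinely stops at 1 PM or the data arrives with shifted timestamps:

index=_internal host=my_aix_host source=*metrics.log* group=per_source_thruput series="*mycustomereport*"
| timechart span=30m sum(kb) as kb_forwarded

Also worth a look: your TIME_FORMAT uses %I (12-hour clock) with %p, and a clean 1 PM/1 AM flip is a classic symptom of AM/PM timestamp parsing going wrong, so compare _time against the raw timestamps of a few events written just after 1 PM.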
Pretty much the title. I have created alerts using the IT Essentials Learn app. The alerts are running, because I receive them in Slack. However, I cannot figure out where the alerts are housed, so I am unable to return to the edit screen and modify them. I've looked through both IT apps as well as the Search and Reporting app's alerts panel. I cannot find the alerts anywhere. Where are they housed?
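If it helps to hunt them down, a REST search run from any search bar will list saved searches (which is what alerts are under the hood) across all apps, with the app they live in. A sketch; the Slack filter is a guess at how that action is named in your setup:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert.track=1 OR actions="*slack*"
| table title eai:acl.app eai:acl.owner actions cron_schedule

The eai:acl.app column should tell you which app context the alert was saved into, which is usually why it doesn't show up in the Search & Reporting alerts panel.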
How can I avoid extension errors due to unconfigured (or incorrectly configured) health check parameters?

The Java SDK for AppDynamics extensions runs a few checks to validate the application and Machine Agent configuration. However, a few errors (like too many open files, high CPU usage, or huge agent log file sizes) can arise if correct configurations for these health check parameters are not provided in the extension's config. The following are some health-check-related exceptions reported in the Machine Agent logs:

Exception when running com.appdynamics.extensions.checks.ExtensionPathConfigCheck@58cd7f75
Exception when running com.appdynamics.extensions.checks.MachineAgentAvailabilityCheck@754f1f01

Therefore, we recommend keeping the "enableHealthChecks" flag disabled in the config.yml file. Disabling health checks will not affect metric collection and reporting by the extension. To disable health checks, configure the flag below and set it to "false" in the config file:

enableHealthChecks: false

If the "enableHealthChecks" flag is not present in the extension's config by default, add the flag at the root level, on a new line without any leading spaces. Find more details on extension health checks in the "What extension Health Checks are useful for debugging?" Knowledge Base article.
The error says the threat list from https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json cannot be downloaded. I have contacted the vendor of the app a few times; no go! Please advise.
Hi there,

I have two queries (Query 1 and Query 2). What I am trying to achieve is that when a user clicks on a server_ID in the tabular output of Query 1, it should be passed as input to the WHERE clause in Query 2. Any help would be appreciated.

Query 1:
index=<<index_name>> sourcetype=webserver
| dedup server_ID
| table server_ID

Query 1 output:
server_ID
49552473-567
d5eedf55-dca
5d4bb774-74a
03f03042-1f7

Query 2:
index=<<index_name>> "Exception"
| where server_ID="<server_ID from Query 1 table>"

Thank you
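A common pattern for this (a sketch; the token name selected_server is arbitrary): have the Query 1 table's drilldown set a token from the clicked row, then reference that token in the Query 2 panel's search.

Drilldown on the Query 1 panel:

<drilldown>
  <!-- $row.server_ID$ is the server_ID value of the clicked row -->
  <set token="selected_server">$row.server_ID$</set>
</drilldown>

Query 2's search then becomes:

index=<<index_name>> "Exception" server_ID="$selected_server$"

Filtering in the base search rather than with a | where clause is also cheaper, since the indexers can discard non-matching events early.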
I have two searches with three fields in common but two fields that differ. I'm trying to find returns that don't have a matching sale for that company_name, mid, and card_number. The returns and sales fields are both dollar amounts; "total" is the dollar amount of the transaction, return or sale.

index=X sourcetype=Y earliest=-1d@d latest=@d
| where transaction_type="refund"
| stats values(total) as returns by company_name, mid, card_number
| append
    [ search index=X sourcetype=Y earliest=-30d@d latest=@d
    | where transaction_type="sale"
    | stats values(total) as sales by company_name, mid, card_number ]

Currently I have this search, which pulls all return transactions from the past day as well as every sale from the past month. The results look like this:

+-----------+------+-------------+---------+-------+
| company   | MID  | card num    | returns | sales |
+-----------+------+-------------+---------+-------+
| company A | 1234 | 1234***7890 | 50.00   |       |
| company B | 1254 | 1234***1234 | 80.00   |       |
| company C | 1236 | 1234***1230 | 75.00   |       |
| company A | 1234 | 1234***7890 |         | 50.00 |
| company B | 1254 | 1234***1234 |         | 30.00 |
| company C | 1236 | 1234***1230 |         | 75.00 |
+-----------+------+-------------+---------+-------+

You can see that company B has refunded card number 1234***1234 for the amount of $80.00, but there was no sale to that card in that amount. I would like my search to display only the rows (with the return amount, not any sale numbers) where this happens. So ideally the search would have returned just one row:

+-----------+------+-------------+---------+
| company   | MID  | card num    | returns |
+-----------+------+-------------+---------+
| company B | 1254 | 1234***1234 | 80.00   |
+-----------+------+-------------+---------+
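An append-free sketch that also keys on the dollar amount (it assumes "matching sale" means same company_name, mid, card_number, and total; one shared 30-day window, with refunds narrowed to roughly the past day inside the stats, so adjust the boundaries to match your exact windows):

index=X sourcetype=Y earliest=-30d@d latest=@d (transaction_type="refund" OR transaction_type="sale")
| stats count(eval(transaction_type="sale")) as sale_count count(eval(transaction_type="refund" AND _time>=relative_time(now(),"-1d@d"))) as refund_count by company_name, mid, card_number, total
| where refund_count > 0 AND sale_count = 0
| rename total as returns
| table company_name, mid, card_number, returns

Because the stats groups by the amount as well, an $80.00 refund only survives the where clause if no $80.00 sale exists for that same company/MID/card.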
Has anyone noticed that there is a big difference between what the MC displays for the hot volume under Indexers > Indexes and Volumes > Volume Detail and what the OS reports? My indexers show that between 11 and 12 TB is being used by the hot volume, while the MC reports that they are not even at 10 TB. That seems like a really big difference to me. I can deal with checking each box, but I like having the MC report where I can see all the indexers in one panel. Thanks!
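For a like-for-like check from the search side, dbinspect totals what Splunk itself believes its buckets occupy (a sketch; dbinspect only counts bucket directories that Splunk is tracking, so orphaned buckets, dispatch artifacts, and anything else sharing the filesystem show up in the OS number but not here, which is one common source of exactly this kind of gap):

| dbinspect index=*
| search state=hot OR state=warm
| stats sum(sizeOnDiskMB) as size_mb by splunk_server
| eval size_tb=round(size_mb/1024/1024, 2)
| fields splunk_server size_tb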