All Topics


Hi Community, we are planning to collect data for the DB2 and IBM WAS components in our infrastructure. We expect the following metrics to be collected from those components; can anyone suggest how this can be achieved? Queue Size, WAS profile performance statistics, WAS cluster status, Calls Per Second, Erroneous Call Rate, Latency, Top Endpoints, Top Statements, Client Connections, Queries, DB Status, Last Backup, Table Space, Utilities, Rows, Commits/Rollbacks, Top queries running more than 100 seconds, Lock Statistics (Memory), Lock Statistics, Transaction Performance. Thank you in advance! Regards, Eshwar
Hi Splunkers, I am using the "Splunk Add-on for AWS" and trying to fetch metrics from CloudWatch. I am currently stuck on an implementation where the metric I want to pull doesn't have any Dimension attached to it ("metrics with no dimensions", as it is termed in CloudWatch). As shown in the image above, when I leave the Dimensions block empty it doesn't allow me to save the entry. I also tried the combination "[{}]", which should mean blank data, but even this didn't work. I would really appreciate some help on how I can fetch these metrics with no dimensions, or whether it is possible at all to pull this data into Splunk using the add-on. Cheers!
Hello everyone! I have a UF installed on an MS Exchange server that sends data to the indexer layer; searches are performed on the search heads. All events in the IIS log file (MS Exchange) look like this: 2023-08-22 11:16:36 172.25.57.29 POST bla bla bla... As you can see, the timestamp carries no timezone information, and on the search heads the events appear 3 hours older than I expected. I read some questions and documentation about how to adjust the timezone and tried setting "TZ = UTC" in props.conf on the UF; I tried other variations as well, but the timestamps didn't change. I also tried "EVAL-_time = _time + 10800", but that attempt failed too. I think this is a really common problem, but maybe I am missing something. Can anyone help me with this question?
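
One thing worth checking: timestamp parsing (and therefore TZ) normally happens at the first full parsing tier, i.e. the indexers or a heavy forwarder, not on a universal forwarder. A minimal props.conf sketch, assuming the IIS data arrives with a sourcetype named ms:iis (placeholder) and the stanza is deployed to the indexer layer:

[ms:iis]
# Interpret timestamps that carry no zone information as UTC
TZ = UTC
# Optional assumptions: pin the timestamp format so the prefix of each IIS line parses predictably
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

Note that this only affects newly indexed events; events that are already indexed keep their original _time.
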
I need help creating a query that gets results from one sourcetype and then fetches other field values based on that output. For example, I have output that shows the transaction_id, but the username corresponding to that transaction ID should be fetched from another sourcetype. The result of the query should be:

_time    hostname    transaction_id    username     city
1:30AM   server1     TEST              cron_user    US
1:31AM   server2     TEST1             cron2_user   CA

In the above, transaction_id is the field present in both sourcetypes; hostname and transaction_id come from one sourcetype, and, matching on the specific transaction_id, username and city should be fetched from sourcetype2.
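
A minimal SPL sketch of the usual stats-based join, assuming the two sourcetypes are named sourcetype1 and sourcetype2 (placeholders) and live in the same index:

index=your_index (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
| stats earliest(_time) as _time values(hostname) as hostname values(username) as username values(city) as city by transaction_id
| table _time hostname transaction_id username city

Grouping by transaction_id with stats avoids the row limits of the join command; values() simply picks up whichever fields each sourcetype contributes for that transaction ID.
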
I have a Splunk query shown below:

basesearch | stats avg(time) as executionTime by method

which results in a table like this:

method    executionTime
A         110.350
B         90.150

I want to obtain the executionTime difference between method A and B as a table result, i.e. A-B = 20.20. Please help me with a Splunk query to get this. Thanks in advance!
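
A minimal sketch of one way to do this, assuming only the two methods A and B are of interest (filter them first otherwise):

basesearch
| stats avg(eval(if(method=="A", time, null()))) as timeA avg(eval(if(method=="B", time, null()))) as timeB
| eval difference = round(timeA - timeB, 2)
| table timeA timeB difference

An alternative is to keep the original stats by method and follow it with | transpose 0 header_field=method and an eval of 'A' - 'B'.
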
Hi Team, I would like to achieve something similar to the below.

1- I have a CSV lookup table named customer-devices.csv with the following two columns:

hostname     DeviceType
hostname1    Cisco
hostname2    Cisco
hostname3    Cisco

2- I am searching events that contain the above hostname field for the past 24 hours. My requirement: the output should list all hostnames that are in the lookup; if a hostname also appears in the search results, mark it Active, and if it does not, mark it Not Active. In other words, it should print all three hostnames from the lookup with a status of Active or Not Active based on their presence in the searched events:

hostname     DeviceType    Status
hostname1    Cisco         Active
hostname2    Cisco         Active
hostname3    Cisco         Not Active
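
A minimal sketch of one common pattern, assuming the events live in index=your_index (placeholder) and the events' hostname values match the lookup values exactly:

| inputlookup customer-devices.csv
| join type=left hostname
    [ search index=your_index earliest=-24h | stats count by hostname ]
| eval Status = if(coalesce(count, 0) > 0, "Active", "Not Active")
| table hostname DeviceType Status

Starting from inputlookup guarantees that every device in the CSV appears in the output, even when it produced no events in the last 24 hours.
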
I'm trying to install Splunk Enterprise version 9.1.2, but the installer rolls back before finishing. It gets almost to the end before the rollback starts, since the /Program Files/Splunk folder had grown to roughly the size of a full Splunk installation. Here is the log file: log
Dear Team, I need your expert advice. a. Indexer Cluster - Is it feasible to separate the replication of indexer peers among the peers themselves? Illustration: within the indexing cluster, the Master is associated with indexer peers A, B, C, and D. My aim is to ensure that the development data being forwarded to A is replicated exclusively to B. Likewise, the production logs should only be replicated between C and D. It's essential that no data is replicated across the A/B pair and the C/D pair. Is this solution attainable? Br, Prasad V
I'm trying to set a query on a dashboard as a metric. If the data looks like the table below, I expect it to show the aggregate count at 07:00 am. I have already set the query as a metric; it fetches the count of data from the last hour, and the schedule on Metrics is set to 1 hour. In the "Set Metric" window I have selected Single Metric, with Value Data as below:

Time        Count
05:00 am    10
06:00 am    20
07:00 am    30

With the current setup I'm not getting the aggregate count that is in the database at 06:00 or 07:00 am. Could someone help, please?
I read the following at https://docs.splunk.com/Documentation/Splunk/9.0.4/SearchReference/Iplocation :

The iplocation command is a distributable streaming command, which means that it can be processed on the indexers.

My goal is to sync a database fetched from MaxMind (GeoLite2-City edition) twice per week to ~/share/GeoLite2-City-custom.mmdb and to set this in ~/etc/system/local/limits.conf:

[iplocation]
db_path = /opt/splunk/share/GeoLite2-City-custom.mmdb

My main concern is how to deal with this later, when our sync script fetches a new database and syncs it to the SHC/IDXC. For instance, we have a saved search scheduled with cron_schedule = */1 * * * * (run every minute) that uses the iplocation command. We might hit an issue where the search runs during the seconds the file is being transferred. Any recommendations on how to deal with this? Is there any way to have the scheduled search run every minute of every day except on Wednesdays and Saturdays between 03:05 and 03:10?
Each call in my own application contains a unique identifier. I want to list all the calls currently running in the system for more than 100 seconds.
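
A minimal SPL sketch of one way to approach this, assuming each call logs a start event and a completion event, that the unique identifier is in a field called call_id, and that the data lives in index=app_index with sourcetype=app:calls (all placeholders):

index=app_index sourcetype=app:calls
| stats earliest(_time) as start_time count(eval(status="COMPLETED")) as completed by call_id
| where completed=0 AND (now() - start_time) > 100
| eval running_seconds = now() - start_time
| table call_id start_time running_seconds

The where clause keeps only calls that have not logged a completion yet and whose start event is more than 100 seconds old; the exact field names and completion marker depend on how the application logs each call.
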
Hi All, I want to onboard a new device in Splunk, a Sangfor firewall, and my question is how to onboard it so that it is also CIM compliant. My basic understanding is: the team will configure the firewall to send logs to our syslog server, and from there the path is syslog -> UF -> IDX -> SH. I believe I need to define an inputs.conf entry for the new firewall. My question is whether Sangfor has an add-on (like Palo Alto, which parses the data itself with proper field names and tags). If it does, can anyone please share the link and tell me where I need to install the add-on (search head, indexer, or UF) to make my data CIM compliant? Thanks
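
A minimal inputs.conf sketch for the file-monitor part, assuming the syslog server writes the Sangfor logs under /var/log/sangfor/ and the UF runs on that syslog host; the path, index, and sourcetype names here are placeholders, and if an official Sangfor add-on exists, its documented sourcetype should be used instead:

[monitor:///var/log/sangfor/*.log]
index = netfw
sourcetype = sangfor:firewall
disabled = false

For CIM compliance, the heavy lifting (field extractions, aliases, tags, and event types that map onto the relevant data models) comes from an add-on's props/transforms, which would normally be installed on the search head for search-time extractions and on the indexer/heavy forwarder tier if any index-time settings are involved.
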
Hi, hope you're all having a great day! Coming to the question: how can I install Python libraries for use in scripts under an app? Basically, I have created an external lookup script, and it requires the following modules: msgraph-sdk and azure-identity. I easily installed them for testing on my local system using pip. Now I want to make this work on Splunk. I researched a bit, and pretty much all the solutions say to create a lib folder under etc/apps/<appName>/lib and then copy the external library folders in there. The thing is, these libraries have a lot of dependencies, which get installed automatically when using pip install. So my question is: does a more sophisticated and straightforward way exist to install these large libraries without copy-pasting potentially hundreds of packages? Any help would be appreciated! Thank you, Best, Jay
We're trying to set up some searches/alerts for when someone makes a change to mailboxes on Exchange Online. I'm still learning SPL, and I'm having some issues with this particular one. Splunk gets the log data from 365 correctly, but it returns a list of 4 dictionaries to identify the changes made:

"Parameters": [{"Name": "Identity", "Value": "valuea"}, {"Name": "User", "Value": "valueb"}, {"Name": "AccessRights", "Value": "valuec"}, {"Name": "InheritanceType", "Value": "valued"}]

The search from the app is below, and it just spits out all 4 names/values - but how would I reference them individually? Mainly I want to do that so I can make nicer looking alerts and dashboards with this data.

`m365_default_index` sourcetype="o365:management:activity" Workload=Exchange Operation=*permission* NOT UserId = "*Microsoft.Exchange.ServiceHost*"
| table CreationTime Operation ObjectId Parameters{}.Name Parameters{}.Value UserId
| rename ObjectId AS Object Parameters{}.Name AS Parameter Parameters{}.Value AS "Value" UserId AS "Modified By"
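
A minimal sketch of one way to pull individual parameters out, pairing up the Name and Value multivalue fields with mvzip and then picking entries by name with mvfind; the output field names (MailboxIdentity, GrantedUser, AccessRights) are just illustrative choices:

`m365_default_index` sourcetype="o365:management:activity" Workload=Exchange Operation=*permission* NOT UserId="*Microsoft.Exchange.ServiceHost*"
| eval pairs = mvzip('Parameters{}.Name', 'Parameters{}.Value', "=")
| eval MailboxIdentity = mvindex(split(mvindex(pairs, mvfind(pairs, "^Identity=")), "="), 1)
| eval GrantedUser = mvindex(split(mvindex(pairs, mvfind(pairs, "^User=")), "="), 1)
| eval AccessRights = mvindex(split(mvindex(pairs, mvfind(pairs, "^AccessRights=")), "="), 1)
| table CreationTime Operation ObjectId MailboxIdentity GrantedUser AccessRights UserId

mvzip joins the two multivalue fields element by element ("Identity=valuea", "User=valueb", ...), mvfind returns the index of the entry whose name matches, and split/mvindex strip off the name to leave just the value.
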
Under SOAR version 6.1.0.131, I configured LDAP authentication. When I click "test authentication" it says Connection Successful, but when I enter a test user and/or test group it states "Test Authentication Fails". And when I try to create a user and choose LDAP, it says "Unable to locate user, please check LDAP configuration". I'm going around in circles.
Hello all, I need your help analyzing my collected log data. I have all of our Windows servers connected to Splunk using the Universal Forwarder, including the domain controllers; only the Security event log is transmitted. I have installed the Splunk Add-on for Microsoft Windows on the Splunk servers (indexer, search head). I want to know about failed login attempts, account lockouts, and tampering with local Administrator accounts. If I run a search on, for example, Event ID 4625, I get thousands of messages where the "host" field contains my domain controllers. In "host" I want to see the system that is actually affected. My Splunk query currently looks like this (I merge German and English log entries):

index=Wineventlog sourcetype=wineventlog source::WinEventLog:Security (EventCode=4625 OR EventCode=4740)
| eval Benutzerkonto = coalesce(Kontoname, Account_Name)
| eval Meldung = coalesce(Fehlerursache, Failure_Reason)
| eval "IP-Quelladresse" = coalesce(Source_network_address, Quellnetzwerkadresse)
| table _time, ComputerName, Benutzerkonto, Meldung, IP-Quelladresse

I only want to know when someone tries to log in to the domain controller, locks their account there, or hijacks the local admin on the domain controller. I do not want to see log entries for other affected systems that merely pass through the domain controllers. Do you have a solution for this problem, or suggestions for improvement? Thanks in advance. Best regards, Codyy_Fast
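
A minimal sketch of how the originating system can be surfaced instead of the domain controller that recorded the event, assuming the Splunk Add-on for Microsoft Windows extracts Workstation_Name for 4625 and Caller_Computer_Name for 4740 (field names vary with add-on version and event language, so verify against your own events):

index=Wineventlog sourcetype=wineventlog source::WinEventLog:Security (EventCode=4625 OR EventCode=4740)
| eval Benutzerkonto = coalesce(Kontoname, Account_Name)
| eval Meldung = coalesce(Fehlerursache, Failure_Reason)
| eval Quellsystem = coalesce(Workstation_Name, Caller_Computer_Name, Source_Network_Address, Quellnetzwerkadresse)
| table _time, ComputerName, Quellsystem, Benutzerkonto, Meldung

Once the originating workstation is in its own field, events can be filtered to only those where that field (or the logon target) matches the domain controllers themselves, which keeps the DC-specific attempts and drops lockouts and failures that the DCs merely record on behalf of other systems.
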
CISCO LEARNING NETWORK | During last month's webinar, Scaling Application Performance Monitoring with Automated Agent Management, Alex Afshar demonstrated how AppDynamics' automated agent seamlessly integrates with popular automation tools to scale mass application tasks. Join Alex again on August 24 to learn how AppD's Automated Agent Management expands its support to other technologies, effortlessly integrating with standard Cloud Native technologies to easily scale and simplify complex tasks like mass installations, customizations, node name changes, and more.

Integrating Automated Agent Management with Cloud Native Technologies - Register for the Live Session: AMER | Thursday, August 24, 9am PST / 12pm EST

About the presenter: Alex Afshar is a Customer Success Specialist with the Cisco/FSO team. His background is in software architecture, engineering, and automation, with 20+ years of industry experience delivering software and automation solutions in a variety of domains and industries. Over the last decade, he has focused on the cloud operating model, with hybrid/native cloud development and deployments, infrastructure automation, and delivery. Prior to joining Cisco, as a Platform Engineering consultant, Alex helped customers deploy and manage a variety of application architectures at scale through automation pipelines. His main focus since joining Cisco/AppD has been helping customers with their Full Stack Observability journey through effective implementation of their monitoring goals and monitoring at scale using Cisco's FSO vision and tooling.

Additional Resources: Did you miss Alex's July webinar, Scaling Application Performance Monitoring with Automated Agent Management? Be sure to check out the recap on YouTube.
I'm running Splunk 9.0.5 (with no option to go to 9.1 right away), and I'm trying to do something I've done 1000 times in Simple XML. The screenshot below describes it perfectly: I want to take the number in one panel, multiply it by the number in a second panel, and have the answer appear in a third panel. I only seem to be able to set drilldown tokens, but I just want to <set token="THISTOKEN">$result.THISFIELD$</set> and another <set token="THATTOKEN">$result.THATFIELD$</set> so that I can ... | eval RESULT=$THISTOKEN$*$THATTOKEN$. I would greatly appreciate anyone who can point me in the right direction to do this in Dashboard Studio... I feel like I've time-warped to Splunk 5.x here...
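
One workaround sketch that sidesteps tokens entirely: compute the product in a single search and point the third panel at it. The search and field names here (search_a, search_b, THISFIELD, THATFIELD) are placeholders for the two existing panel searches:

<search_a producing THISFIELD>
| appendcols [ search <search_b producing THATFIELD> ]
| eval RESULT = THISFIELD * THATFIELD
| table RESULT

This doesn't reproduce the Simple XML token mechanism, but it gives the third panel its own self-contained search, which Dashboard Studio on 9.0.x can run without needing tokens set from search results.
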
Hi, on certain events the indexed time is 24 hours after the event _time, across all indexes on Splunk Cloud. I just wondered if anyone has seen this before; it doesn't seem to matter which sourcetype is used. Thanks,
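
A minimal sketch for quantifying the lag with the internal _indextime field, to confirm whether it is really a flat 24 hours; the 3600-second threshold and the index=* scope are arbitrary and should be narrowed for a real run:

index=* earliest=-24h
| eval lag_seconds = _indextime - _time
| where lag_seconds > 3600
| stats count avg(lag_seconds) as avg_lag_seconds by index sourcetype host

A lag that is almost exactly 86400 seconds usually means the date portion of the timestamp is being mis-parsed (for example a TIME_FORMAT or DATETIME_CONFIG issue), while variable lags point more toward forwarding or queueing delays.
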
Hello, I'm trying to create working props/transforms to separate standard events from JSON-formatted logs (by filtering/resetting the JSON logs to their own sourcetype). Here's what I've tried so far; I am able to do most of what I want, with the exception of timestamp recognition for the JSON events. The config below trims my JSON event headers and filters/resets the JSON events to their own separate sourcetype. Since the header is trimmed, Splunk is doing a great job auto-extracting my JSON field/value pairs. I'm looking for help on getting the timestamp, or _time value, to match my JSON field "log_time".

PROPS.conf

[mainlog]
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_PREFIX = (?=[20])|log_time:
SEDCMD-remove-jsonheader = s/^[0-9T\:Z]*.*?\s*{/{/g
TRANSFORMS-set_sourcetype = example_json

[mainlog:json]
TIME_PREFIX = log_time:
#TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 3
INDEXED_EXTRACTIONS = json

TRANSFORMS.conf

[example_json]
REGEX = \{\"json\"\:
FORMAT = sourcetype::mainlog:json
DEST_KEY = MetaData:Sourcetype

sample log:

2023-08-21 11:59:10 TRACE [pool-12-thread-1] c.a.l.m.e.AbstractElasticSearchBatch$ElasticSearchBatch [Slf4jLogging.scala:13] Deadline time left is 302ms and record count is 72
2023-08-21 11:11:41 TRACE [pool-11-thread-1] c.a.l.m.e.AbstractElasticSearchBatch$ElasticSearchBatch [Slf4jLogging.scala:13] Indexing {"json":"s3://example/logs/2023/08/21/0111111a-2222-33ff-9e4e-c1a01dfdf448.gz","phase":"ingest","log_time":"2023-08-21T15:11:31.073Z","tick":"7777777777","id":"0111111a-2222-33ff-9e4e-c1a01dfdf448","source_time":"2023-08-21T11:11:25Z","status":"submitted","client":"555555","environment":"test","category":"changestream","account":"9","level":7}
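
A minimal sketch of the timestamp settings for the [mainlog:json] stanza, assuming the goal is to read _time from the log_time value in the sample above; the main differences from the posted config are a TIME_PREFIX that accounts for the JSON quoting and a lookahead long enough to cover the whole ISO-8601 timestamp:

[mainlog:json]
TIME_PREFIX = \"log_time\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 30
INDEXED_EXTRACTIONS = json

With MAX_TIMESTAMP_LOOKAHEAD = 3, only the first three characters after the prefix are examined, which is not enough for a value like 2023-08-21T15:11:31.073Z, so widening the lookahead (and uncommenting TIME_FORMAT) is likely the missing piece.
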