All Topics

Hello, would it be possible to create a dashboard where we can receive alerts directly?
Hi Community, I need to calculate the difference between two timestamps printed in the log4j logs of a Java application, taken from two different searches. The timestamp is printed in the log after the "system time" keyword.

Log for search 1:

2024-07-18 06:11:23.438 INFO [ traceid=8d8f1bad8549e6ac6d1c864cbcb1f706 spanid=cdb1bb734ab9eedc ] com.filler.filler.filler.MessageLoggerVisitor [TLOG4-Thread-1-7] Jul 18,2024 06:11:23 GMT|91032|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.visitor.MessageLoggerVisitor|-|PRD01032 - Processor (Ingress Processor tlog-node4) processed message with system time 1721283083437 batch id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 correlation-id (f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001) and body (

Log for search 2:

2024-07-18 06:11:23.487 INFO [ traceid= spanid= ] com.filler.filler.filler.message.processor.RestPublisherProcessor [PRD-1] Jul 18,2024 06:11:23 GMT|91051|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.processor.RestPublisherProcessor|-|PRD01051 - Message with correlation-id f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001 successfully published at system time 1721283083487 to MCD

I am using the query below to calculate the time difference, but I end up with duplicates and a lot of null values. The null values appear only in the combined search; when I do the calculations for the individual searches, no null values show up.

sourcetype=log4j
| rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (.*?) correlation-id \((?<corrid_batch>.+)\) and body"
| rex "com\.filler\.filler.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid_mcd>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
| dedup corrid_batch
| eval diff = (systime_mcd-systime_batch)
| where corrid_mcd=corrid_batch
| table diff

Kindly help.
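For what it's worth, the two rex extractions above target two different events, so on any single event one of systime_batch/systime_mcd is null and the where corrid_mcd=corrid_batch test can never be true, which would explain the nulls. One common pattern is to extract a single correlation-id field from both event types and only subtract after pairing the events with stats. A sketch, with trimmed-down regexes based on the sample events above (adjust them to the real format), assuming each correlation id appears in one event of each type:

sourcetype=log4j ("PRD01032" OR "PRD01051")
| rex "PRD01032 \- Processor .*? processed message with system time (?<systime_batch>\d+) batch id .*? correlation-id \((?<corrid>[^\)]+)\)"
| rex "PRD01051 \- Message with correlation-id (?<corrid>\S+) successfully published at system time (?<systime_mcd>\d+) to MCD"
| stats min(systime_batch) as systime_batch max(systime_mcd) as systime_mcd by corrid
| eval diff_ms = systime_mcd - systime_batch
| table corrid systime_batch systime_mcd diff_ms

Also note that in the sample PRD01051 event the correlation-id is not wrapped in parentheses, so the \(...\) around corrid_mcd in the original second rex may be preventing that extraction from matching at all.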
I saw the following text in the documentation:

When ingesting metrics data, each metric event is measured by volume like event data. However, the per-event size measurement is capped at 150 bytes. Metric events that exceed 150 bytes are recorded as only 150 bytes. Metric events less than 150 bytes are recorded as event size in bytes plus 18 bytes, up to a maximum of 150 bytes. Metrics data draws from the same license quota as event data.

I'm wondering how Splunk handles multi-value metrics with their dimensions and tags. Here is an example:

{
  Tag1: Cross-Direction (CD)
  Type: CSV
  Unit: LS77100
  Groupe: Traverse
  metric_name: LS77100.Traverse.Y1: 1.15
  metric_name: LS77100.Traverse.Y2: 2.13
  metric_name: LS77100.Traverse.Y3: 2.14
  metric_name: LS77100.Traverse.Y4: 1.16
}

So what counts as a byte here? Do I have to pay for every character after "metric_name:"? And what about the tags above: do I pay for a tag like Tag1 or Unit once, or four times in this example? In this example I have just four points; in reality there are around 3000 points. At the moment I'm sending the information as an event to Splunk. I'm thinking about ingesting it as metrics because I assume they perform better. Maybe another way is to send it as an event, split it, and use mcollect; I'm not sure what the best way is.
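On the mcollect idea, a rough sketch of the event-to-metrics route (the index/sourcetype names and the assumption that each event carries the four readings as fields are mine, not from the example) that turns one wide event into four metric data points, each carrying the dimensions:

index=my_event_index sourcetype=my_csv_sourcetype
| eval reading=mvappend("LS77100.Traverse.Y1=".'LS77100.Traverse.Y1',
                        "LS77100.Traverse.Y2=".'LS77100.Traverse.Y2',
                        "LS77100.Traverse.Y3=".'LS77100.Traverse.Y3',
                        "LS77100.Traverse.Y4=".'LS77100.Traverse.Y4')
| mvexpand reading
| eval metric_name=mvindex(split(reading,"="),0), _value=tonumber(mvindex(split(reading,"="),1))
| fields _time metric_name _value Tag1 Type Unit Groupe
| mcollect index=my_metrics_index

Each row that reaches mcollect needs a metric_name and a numeric _value; the remaining fields (Tag1, Type, Unit, Groupe) should travel along as dimensions.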
With load balancing, the Universal Forwarder sends data to all the indexers equally, so that no single indexer gets all the data and together the indexers hold all of it. It also provides automatic switchover in case an indexer goes down. Load balancing can be set up on the UF in outputs.conf in two ways:

By time
By volume

For time-based load balancing we use the autoLBFrequency setting, and for volume-based load balancing we use autoLBVolume.

Let's say I have three indexers to which I want to send data from the UF. My outputs.conf will look like this:

[tcpout:my_indexers]
server = 10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997

Now, to send data for 3 minutes to one indexer, then switch to another indexer and then to another, set autoLBFrequency like this:

autoLBFrequency = 180

With this setting, the UF sends data to indexer 10.10.10.1 for 3 minutes continuously, then moves on to the other indexers, and the loop continues.

To send data based on volume, say you want to configure the UF to send 1 MB of data to an indexer and then switch to the next indexer in the list; the setting looks like this:

autoLBVolume = 1048576

In the case of a very large file, such as a chatty syslog file, or when loading a large amount of historical data, the forwarder may become "stuck" on one indexer, trying to reach EOF before being able to switch to another indexer. To mitigate this, you can use the forceTimebasedAutoLB setting on the forwarder. With this setting, the forwarder does not wait for a safe logical point and instead makes a hard switch to a different indexer every autoLB cycle:

forceTimebasedAutoLB = true

To guard against loss of data when forwarding to an indexer, you can enable indexer acknowledgment. With indexer acknowledgment, the forwarder resends any data that the indexer does not acknowledge as "received". The useACK setting is used for this purpose:

useACK = true

The final outputs.conf will look like this:

[tcpout]
useACK = true
autoLBFrequency = 180
autoLBVolume = 1048576

[tcpout:my_indexers]
server = 10.10.10.1:9997, 10.10.10.2:9997, 10.10.10.3:9997
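As a quick sanity check after applying a configuration like this (a sketch; the index and host filters are placeholders for data this forwarder actually sends), you can confirm events are being spread across the indexing peers by counting recent events per peer from the search head:

index=<an_index_this_UF_writes_to> host=<a_host_behind_this_UF> earliest=-60m
| stats count by splunk_server
| sort - count

A roughly even count per splunk_server suggests the forwarder is switching indexers as configured.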
How do I download the company certificate as a Splunk partner?
Hello, I need help with a search query that at first seemed easy but is surprisingly difficult to execute. I have a money transaction db between two persons; I have to find which persons send out more money than they receive, and output all of their transactions (both sent and received).

My query is like so:

index=myindex
| eventstats sum(AMOUNT) as total_sent by ACCOUNT_FROM
| eval temp=ACCOUNT_FROM
| table _time ACCOUNT_TO ACCOUNT_FROM TRACE total_sent INFO temp
| join type=inner temp
    [ search index=myindex
    | stats sum(AMOUNT) as total_received by ACCOUNT_TO
    | eval temp=ACCOUNT_TO ]
| where total_sent > total_received

This query only produces the transactions where the account is sending, not the transactions where that account is receiving. How do I go about this? I'm thinking about outputting the temp field to a csv and running inputlookup against it again.
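One join-free way to approach this, as a sketch reusing the field names from the query above (it assumes AMOUNT, ACCOUNT_FROM and ACCOUNT_TO exist on every transaction event; the account/sent/received helper fields are made up for illustration): duplicate each transaction once per involved account, total sent and received per account with eventstats, then keep every transaction that belongs to a net-sender account:

index=myindex
| eval account=mvappend(ACCOUNT_FROM, ACCOUNT_TO)
| mvexpand account
| eval sent=if(account==ACCOUNT_FROM, AMOUNT, 0), received=if(account==ACCOUNT_TO, AMOUNT, 0)
| eventstats sum(sent) as total_sent, sum(received) as total_received by account
| where total_sent > total_received
| table _time account ACCOUNT_FROM ACCOUNT_TO AMOUNT total_sent total_received

After the mvexpand, each transaction appears once under the sender's account and once under the receiver's, so a net-sender account keeps both sides of its history in the output.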
Does Splunk Enterprise support URI Templates to extend the REST API? I'd like to be able to expose an API such as /resource/{name}/dosomething
I created the inputs in the local folder of the app to use UDP on the heavy forwarder and set the sourcetype to f5:bigip:syslog, but the app is not splitting the data into multiple sourcetypes like it says it is supposed to. Can someone let me know what can be done to get this to work?
I wrote this query to help look for multiple Autonomous System Number (ASN) values and multiple user agent values in a user's Okta session, as this is an indication of a session hijack. I have created this search, which works as needed:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "Session ID"='authenticationContext.externalSessionId'
| eval "User"='actor.alternateId'
| eval "Risk"='debugContext.debugData.risk'
| stats dc("user_agent") as "Agent Count" values(user_agent) AS "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN dc(Risk) as "Risk Count" values(Risk) as Risk by User "Session ID"
| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| search "ASN Count" > 1 AND "Agent Count" > 1

Sample output:

Session ID: idxxxxxxxxxxxx | ASN: 12345, 321 | ASN Count: 2 | User Agent: UserAgent1, UserAgent2 | Agent Count: 2 | User: user@company.com | Risk: {reasons=Anomalous Device, level=MEDIUM}
Session ID: idxxxxxxxxxxxx | ASN: 6789, 321 | ASN Count: 2 | User Agent: UserAgent1, UserAgent2 | Agent Count: 2 | User: user@company.com | Risk: {reasons=Anomalous Device, level=MEDIUM}

The issue is that I am not getting only anomalous activity as expected, but many false positives, as most session IDs legitimately have more than one ASN attached to the session. My thought was to create a lookup (asn_user.csv), eventually updated through a scheduled search (run at a slower rate than the main search) that appends new data, to gather the Users and ASNs that have had a successful transaction, using this search:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval "ASN"='securityContext.asNumber'
| eval "User"='actor.alternateId'
| table ASN User
| dedup ASN User

ASN: 12345 | User: user@company.com
ASN: 321 | User: user@company.com

My issue right now is trying to use the lookup against the main search. The goal is: IF the ASN is new to the user in the main Okta search (meaning the ASN is not seen in the lookup file asn_user.csv), then return the

| table "Session ID", ASN, "ASN Count", "User Agent", "Agent Count", User, Risk
| search "ASN Count" > 1 AND "Agent Count" > 1

results with the anomalous ASN, while still meeting the "ASN Count" > 1 AND "Agent Count" > 1 requirement.

Does anyone have some ideas to accomplish this?
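One way to wire the lookup in, as a sketch (it assumes asn_user.csv is uploaded as a lookup file or has a lookup definition with ASN and User columns; known_user, is_new_asn, new_asn_events and "New ASNs" are helper names made up for illustration): look up each event's ASN/User pair before the stats, mark unmatched pairs as new, and only keep sessions that contain at least one new ASN:

index="okta" actor.alternateId=*@* authenticationContext.externalSessionId!="unknown"
| eval ASN='securityContext.asNumber', "Session ID"='authenticationContext.externalSessionId', User='actor.alternateId', Risk='debugContext.debugData.risk'
| lookup asn_user.csv ASN, User OUTPUT User AS known_user
| eval is_new_asn=if(isnull(known_user), 1, 0)
| stats dc(user_agent) as "Agent Count" values(user_agent) as "User Agent" dc(ASN) as "ASN Count" values(ASN) as ASN values(eval(if(is_new_asn==1, ASN, null()))) as "New ASNs" sum(is_new_asn) as new_asn_events values(Risk) as Risk by User "Session ID"
| where 'ASN Count' > 1 AND 'Agent Count' > 1 AND new_asn_events > 0
| table "Session ID", ASN, "New ASNs", "ASN Count", "User Agent", "Agent Count", User, Risk

The scheduled search that maintains asn_user.csv could then append its deduped ASN/User pairs with outputlookup append=true.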
Hello! Wondering if someone can help me fine-tune my query. I have it very close, but not quite what I want. Here is my query, in addition to a screenshot of what the results currently look like:

[SomeSearch]
| stats sum(FRCount) as totalHourCount by QSDateOfTxn, QSHourOfTxn limit=1000
| eval PPM = round(totalHourCount/60,2)
| transpose 0 header_field=QSHourOfTxn

I would like the date to be a column (it is possible there would be multiple dates) but still show both totalHourCount and PPM, as is, for every hour. Thanks, David
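If the goal is one row per date with one column per hour showing both values, a sketch built on the query above (the combined "count (PPM)" cell format is just one way to fit two measures into a single cell):

[SomeSearch]
| stats sum(FRCount) as totalHourCount by QSDateOfTxn QSHourOfTxn
| eval PPM = round(totalHourCount/60,2)
| eval cell = totalHourCount." (".PPM." PPM)"
| xyseries QSDateOfTxn QSHourOfTxn cell

If the two measures must stay as separate numeric columns instead, you can keep them on separate rows by adding a measure name field before the xyseries, or run xyseries once per measure and join the results on QSDateOfTxn.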
How does Splunk AI assistant keep customer data confidential?
What I want to do is summarize a completed transaction of ActivityIDs, like Windows updates. However, I do not know whether the ActivityID is reused after a reboot (and so may not be part of the original transaction), or whether a period of time passes within the 24 hours of my reports and the ActivityID is reused again. Disclaimer: I do not know that much about Microsoft events... so maybe this sounds all wrong?
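For what it's worth, a sketch of how this kind of ID reuse is often bounded (the index, sourcetype and time limits below are assumptions, not something from the question, and they assume an ActivityID field is extracted): the transaction command can group events sharing an ActivityID while forcing a group to close after a maximum duration or pause, so an ActivityID reused after a reboot or a long gap starts a new transaction instead of being merged into the old one:

index=wineventlog sourcetype=WinEventLog ActivityID=*
| transaction ActivityID maxspan=4h maxpause=30m
| table _time ActivityID duration eventcount

The duration and eventcount fields produced by transaction then give a per-transaction summary you can report on.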
I have an outside SAML system (Okta) which we are using to log in to our Splunk system, and we are defining indexes for people in different buildings to work against (named after the buildings). The problem is that people move around from building to building and they seem to accrete access to virtually every index (building). We need to stop this by making sure that everyone only gets the access they need for their own building, so that creating and revoking access is all controlled within Okta. The other issue is that our organisation moves buildings quite often (due to the nature of the business).

So... I have created the following:

Okta User -> Okta Group
Splunk Role -> Building Index

I need to be able to programmatically make the link SAML Group -> Splunk Role. I can read the link between SAML Group and Splunk Role with the REST API using the information on the following page (using /services/admin/SAML-groups), but I cannot find any documentation about creating and deleting the links.

https://docs.splunk.com/Documentation/Splunk/9.2.2/RESTREF/RESTaccess

I know that I can maintain the links using the information at the URL below, but not programmatically as yet.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Modifyorremoverolemappings

Does anyone know how I can do this programmatically, please?
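Not the create/delete side, but as a small building block for auditing what is currently mapped (a sketch; it uses the same admin/SAML-groups endpoint mentioned above, and assumes the returned content exposes a roles field, which may vary by version):

| rest /services/admin/SAML-groups splunk_server=local
| table title roles
| rename title as saml_group

That at least makes it easy to schedule a check for groups that have accreted roles while the write side is being worked out.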
Hello, has anyone already had a similar error?

07-17-2024 14:28:21.721 +0200 WARN PasswordHandler [38267 TcpChannelThread] - Unable to decrypt passwords.conf/[credential:REST_CREDENTIAL#TA-thehive-cortex#configs/conf-ta_thehive_cortex_account:service_thehivesplunk_cred_sep2:]/password

This was solved by using clear text or (it seems) by pushing the add-on with the SHC "already encrypted password". Thanks.
Since Microsoft Teams has deprecated the O365 connectors, standard incoming webhooks, and the use of MessageCard-format messages, this Microsoft Teams messages publication add-on is not working with the workflow endpoint. Also, using a standard webhook and providing the workflow URL returns errors, since the payload is not in the format of the Adaptive Card message that workflows expect. Do you have a solution for how to connect alerts to Microsoft Teams channels now, given this deprecation of connectors?
Hello,

I'd like to create a single value viz that displays the percent change from a point in time to now. Basically, I have a dashboard with a panel that simply counts the number of records in the given time range. The time input is a simple time picker and the base search is simply:

index=myindex
| stats count

I would like to add a panel, maybe a single value viz, that shows a percent change. For example, if the default is "Last 24 hours", I would like to show the count of the last 24 hours and the percent change from the previous 24 hours. Additionally, if the user selected "Last 7 days", I would like it to give the count of the last 7 days and the percent change from the 7 days before that.

Thanks for the help
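For a fixed window this is straightforward; a sketch assuming a hard-coded 24-hour window (making it follow the dashboard's time picker would mean deriving earliest/latest tokens and doubling the window, which is a separate exercise):

index=myindex earliest=-24h latest=now
| stats count as current
| appendcols
    [ search index=myindex earliest=-48h latest=-24h
    | stats count as previous ]
| eval percent_change = round((current - previous) / previous * 100, 2)
| table current previous percent_change

A Single Value visualization can then point at percent_change, with current shown in a neighbouring panel or as the caption.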
While there can be many reasons for memory growth, one of the reasons could be increased memory usage by the idle search process pool (search-launcher).

index=_introspection component=PerProcess host=<any one SH or IDX host>
| timechart span=5s sum(data.mem_used) as mem_usedMB by data.process_type useother=f usenull=f

Example: if memory usage by `search-launcher` is much higher than `search`, then the idle search process pool (search-launcher) is wasting system memory. If you see that trend, you want to reduce the idle search process pool.

There are several options in limits.conf to reduce the idle search process pool. One option is to set enable_search_process_long_lifespan = false in server.conf (a new option in 9.1 and above):

enable_search_process_long_lifespan = <boolean>
* Controls whether the search process can have a long lifespan.
* Configuring a long lifespan on a search process can optimize performance by reducing the number of new processes that are launched and old processes that are reaped, and is a more efficient use of system resources.
* When set to "true": Splunk software does the following:
  * Suppresses increases in the configuration generation. See the 'conf_generation_include' setting for more information.
  * Avoids unnecessary replication of search configuration bundles.
  * Allows a certain number of idle search processes to live.
  * Sets the size of the pool of search processes.
  * Checks memory usage before a search process is reused.
* When set to "false": The lifespan of a search process at the 50th percentile is approximately 30 seconds.
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

Why does the idle search process pool appear to be unused (more idle searches compared to the actual number of searches running on the peer)? Before a search request is dispatched to peers, SHCs/SHs first need to find the common knowledge bundle across peers. On a peer, only an idle search process created with the matching common knowledge bundle is eligible for reuse. That is why, in most cases, the idle search process pool remains unused: the overall pool is a collection of idle search processes associated with different knowledge bundles. Now think of a scenario with multiple SHC clusters (for example ES/ITSI/ad-hoc etc.), each SH cluster replicating its own knowledge bundles. The idle search process pool then becomes a collection of idle search processes associated with different knowledge bundles from different search heads.

You can search for enable_search_process_long_lifespan in limits.conf to see the impact; it controls a lot of configs. But the main reason for memory growth is max_search_process_pool (by default a pool of 2048 idle search processes).

max_search_process_pool = auto | <integer>
* The maximum number of search processes that can be launched to run searches in the pool of preforked search processes.
* The setting is valid if the 'enable_search_process_long_lifespan' setting in the server.conf file is set to "true".
* Use this setting to limit the total number of running search processes on a search head or peer that is prevented from being overloaded or using high system resources (CPU, Memory, etc).
* When set to "auto": Splunk server determines the pool size by multiplying the number of CPU cores and the allowed number of search processes (16). The pool size is 64 at minimum.
* When set to "-1" or another negative value: The pool size is not limited.
* Has no effect on Windows or if "search_process_mode" is not "auto".
* Default: 2048

If an instance is running 1000 searches per minute, and assuming bundle replication is not frequent, why create a pool of 2048 idle search processes when the maximum requirement is 1000? With surplus memory this is not an issue, but a 2048-process idle pool is not OK for memory-limited instances.
Hi Community, we are using Splunk Enterprise. From Splunk Search & Reporting, how can we sum a site's traffic, like the monthly bandwidth? Thanks, Steve
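It depends entirely on what traffic data is indexed, but as a sketch (the index, sourcetype and the bytes_in/bytes_out/site field names below are assumptions; substitute whatever your traffic data actually uses):

index=network sourcetype=firewall_traffic site=*
| eval total_gb = (bytes_in + bytes_out) / 1024 / 1024 / 1024
| timechart span=1mon sum(total_gb) as monthly_GB by site

This gives one row per month and one column per site, which is a reasonable starting point for a monthly bandwidth report.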
I am trying to create a table showing the ingestion (usage) in GB by index over the past 24 hours. I am using this search to do that successfully:

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=*
| stats sum(b) as Usage by idx
| eval Usage=round(Usage/1024/1024/1024,2)
| rename idx AS index
| sort -Usage

Now I would like to add a sparkline next to the Usage column showing the trend of ingestion over the past 7 days for each index. How can I do this?
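One way to get both in a single table, as a sketch (it assumes the search time range is widened to the last 7 days so the sparkline has data to draw, while the Usage column is restricted to the last 24 hours via an eval filter inside stats):

index=_internal source="/opt/splunk/var/log/splunk/license_usage.log" type=Usage idx=* earliest=-7d@d latest=now
| eval GB = b/1024/1024/1024
| stats sparkline(sum(GB), 1d) as "7 Day Trend" sum(eval(if(_time >= relative_time(now(), "-24h"), GB, 0))) as Usage by idx
| eval Usage = round(Usage, 2)
| rename idx AS index
| sort -Usage

The sparkline span of 1d gives one point per day over the 7-day window, while Usage only sums the last 24 hours.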
Hello everyone! I've created a custom alert action with an HTML file located at Splunk\etc\apps\my-app\local\data\ui\alerts\index.html, and my JavaScript and CSS files are in Splunk\etc\apps\my-app\appserver\static\index.js. My goal is to dynamically add fields to a form using JavaScript inside my HTML file. I'm encountering challenges with loading the JavaScript due to potential security concerns in Splunk apps. Despite this, I'm looking for a solution to implement this small functionality. Any assistance would be greatly appreciated. Thank you for your help!

HTML code:

<!DOCTYPE html>
<html>
<head>
    <title>Custom Alert Action</title>
    <script></script>
    <script src="../../../../appserver/static/index.js"></script>
</head>
<body>
    <h1>Custom Alert Action</h1>
    <button id="performActionBtn">Perform Action</button>
</body>
</html>

JS code:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
    console.log('js loaded successfully');

    function createfunction() {
        alert('js loaded successfully!');
    }

    $(document).ready(function() {
        console.log('js loaded successfully');
        $('#performActionBtn').click(function(event) {
            event.preventDefault();
            createfunction();
        });
    });
});

Hi dear @avikramengg, I saw a similar question you asked earlier. Have you found a solution? If so, could you please advise me as well? Thanks!