All Topics


Hi Team, I am setting up an alert in Splunk where my data is in the format below. I am writing a query that returns only those rows where CertExpiry is within 15 days. Basically, the alert should trigger if the cert is expiring in the next 15 days.

Component | Server | CertExpiry
Zone.jar  | sample | September 13, 2023 9:49:49 AM CDT
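In SPL this is usually done by parsing CertExpiry with strptime and comparing against relative_time(now(), "+15d"). The date arithmetic itself can be sketched in Python; the function name and the naive-local-time assumption are mine, not from the post:

```python
from datetime import datetime, timedelta

def expires_within(cert_expiry, days=15, now=None):
    # Drop the trailing timezone abbreviation ("CDT"): strptime cannot
    # reliably parse it, so we treat the timestamp as naive local time
    # (an assumption -- adjust if your data spans timezones).
    ts = cert_expiry.rsplit(" ", 1)[0]
    expiry = datetime.strptime(ts, "%B %d, %Y %I:%M:%S %p")
    now = now or datetime.now()
    # True only when expiry is in the future AND within the window.
    return timedelta(0) <= expiry - now <= timedelta(days=days)
```

A rough SPL equivalent would be something like `| eval expiry=strptime(CertExpiry, "%B %d, %Y %I:%M:%S %p %Z") | where expiry>=now() AND expiry<=relative_time(now(), "+15d")` — hedged, untested against your events.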
Hi All, I have a requirement to add new members to an existing SH cluster. I have gone through the link below, which explains adding a member to the SHC:

Add a cluster member - Splunk Documentation

How do I integrate it with the CM/indexers? Do I need to do that after the step above? Is there a link for this?
Hi All, which page in UBA shows closed threats, or how can we view threats that have been closed out? By default, the "Threats" review page shows all open or active ones. I couldn't find any option in the top menu, or any filter, that takes us to a list of the threats that have been closed out.
Dear Team, I am creating use cases in Enterprise Security. Earlier I used to find a use case list (around 600+) on the Splunk Security Essentials documentation site, with a clear explanation of each use case, the required log sources, and sample logic for use case creation. But now I am unable to find the use case URL on the Splunk site. Could you please help me find the URL for AWS use case creation? Regards,
I have an index where each event is a JSON object, structured as follows:

{
  "otherFields": "otherValue",
  "fields": [
    { "id": 123, "value": "ABC" },
    { "id": 456, "value": "DEF" }
  ]
}

Now I want to extract the "id" value and rename it with a meaningful name, so that the renamed field shows up in the "interesting fields" section of the search app and I can use it in a timechart command to filter on the value. I have tried something like:

| mvexpand fields{}.id | search feilds{}.id=123 | rename fields{}.id AS "Product Name"

In the interesting fields section this gave me my current output (screenshot), but I want it to show the id with its corresponding value (see my desired output screenshot). Also, I want to rename not just one id but multiple; how can I do so?
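In SPL this would typically be spath plus an eval/case mapping of ids to names; the underlying transformation can be sketched in Python. The id-to-name mapping below is hypothetical (only 123 is named in the question):

```python
import json

# Hypothetical mapping of field ids to the meaningful names you want
# to see in the search app; extend it for each id you care about.
ID_NAMES = {123: "Product Name", 456: "Region"}

def extract_fields(event_json):
    # Flatten the fields[] array into named top-level fields.
    event = json.loads(event_json)
    out = {}
    for f in event.get("fields", []):
        name = ID_NAMES.get(f["id"])
        if name is not None:
            out[name] = f["value"]
    return out
```

One thing worth noting from the attempt itself: the search clause spells `feilds{}.id` rather than `fields{}.id`, which alone would return nothing.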
Hello, I am facing an issue: in my Java agent application I couldn't find the JDBC connection pool details; only thread details are available. Does anyone have any idea about this and how to resolve it?
Hi All, below is my raw log:

2023-08-08 10:25:48.389 [INFO ] [Thread-3] CollateralProcessor - Completed calculating total balances: Opening Balance: 27321564738.9 Closing Balance: 27223794872.86 Age Total Balance: 27223794872.86 Collateral Sum: 27223794872.86

I tried this, but it returns no result:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "CollateralProcessor - Completed calculating total balances:"
| rex "CollateralProcessor - Completed calculating total balances: Opening Balance=(?<Opening Balance>)"
| table Opening Balance
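Two things stand out in that rex: the log uses "Opening Balance:" with a colon rather than "=", and the capture group (?&lt;Opening Balance&gt;) has no pattern body and a space in its name, which regex named groups do not allow. A quick Python check of a pattern that does match the sample line:

```python
import re

# Sample line copied from the post.
LOG = ("2023-08-08 10:25:48.389 [INFO ] [Thread-3] CollateralProcessor - "
       "Completed calculating total balances: Opening Balance: 27321564738.9 "
       "Closing Balance: 27223794872.86 Age Total Balance: 27223794872.86 "
       "Collateral Sum: 27223794872.86")

# Colon (not "="), a real pattern body, and no space in the group name.
m = re.search(r"Opening Balance:\s*([\d.]+)", LOG)
opening_balance = float(m.group(1))
```

Ported back to SPL this would be roughly `rex "Opening Balance:\s+(?<OpeningBalance>[\d.]+)"`, renaming afterwards if a display name with a space is wanted — hedged, untested against your events.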
Hi All, I am trying to install an app I have locally via the API. I have tried both a curl command and a Python script.

Curl command:

curl -k -u admin:YOUR_SPLUNK_PASSWORD \
-X POST https://YOUR_SPLUNK_HOST:8089/services/apps/local \
-H "Content-Type: multipart/form-data" \
-F "name=appname" \
-F "appfile=@/path/to/your/app_package.tar.gz"

The error I get is the following:

<msg type="ERROR">Error during app install: failed to extract app from app_package.tar.gz to /opt/splunk/var/run/splunk/bundle_tmp/7391c87f2a023fd5: No such file or directory</msg>

Python script:

import splunklib.client as client
import splunklib.results as results
import requests

# Splunk server details
splunk_host = 'hostname'
splunk_port = 8089
splunk_username = 'splunk_user'
splunk_password = 'splunk_password'

# App installation details
app_package_path = '/path/to/custom_app.tgz'
app_name = 'custom_app'

# Connect to Splunk
service = client.connect(
    host=splunk_host,
    port=splunk_port,
    username=splunk_username,
    password=splunk_password
)

# Install the app
try:
    endpoint = f'/services/apps/local'
    headers = {'Authorization': f'Splunk {service.token}'}
    files = {'app_package': open(app_package_path, 'rb')}
    data = {'app': app_name}
    app_response = requests.post(
        f'https://{splunk_host}:{splunk_port}{endpoint}',
        headers=headers,
        files=files,
        data=data,
        verify=False  # Disabling SSL certificate verification (for self-signed certificates)
    )
    app_status = app_response.status_code
    if app_status == 200:
        print(f"App '{app_name}' was successfully installed.")
    else:
        print(f"Failed to install app '{app_name}'. Status code: {app_status}")
except Exception as e:
    print(f"An error occurred: {str(e)}")

# Disconnect from Splunk
service.logout()

The error I get:

/usr/lib/python3/dist-packages/urllib3/connectionpool.py:1015: InsecureRequestWarning: Unverified HTTPS request is being made to host '127.0.0.1'. Adding certificate verification is strongly advised.
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
warnings.warn(
Failed to install app 'custom_app'. Status code: 400

In both cases, I have confirmed that I am able to connect to and query Splunk, so there shouldn't be any connectivity issues. I have also confirmed that I can install the app manually, so there shouldn't be any issue with the .tgz file.
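One possibility worth verifying against the REST API reference: the apps/local endpoint does not accept a multipart file upload; it expects the package to already exist on the Splunk server's filesystem, passed as the "name" form field with "filename=true". A sketch of building such a request (paths, host, and token are placeholders, and this assumption should be checked before relying on it):

```python
import urllib.request
from urllib.parse import urlencode

def build_install_request(host, port, token, app_path):
    # Assumption: app_path is a path on the *server*, not the client;
    # the endpoint takes it as a form field rather than an upload.
    data = urlencode({"name": app_path, "filename": "true"}).encode()
    return urllib.request.Request(
        f"https://{host}:{port}/services/apps/local",
        data=data,
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
```

If that is right, it would also explain the curl error: the server tried to extract "app_package.tar.gz" from a local path that does not exist on its own filesystem.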
I'm using Splunk Enterprise version 8.2.7. I'm trying to get a session key and then run a search through the REST API. Requesting the login through curl works:

C:\Users\A0493110>curl -k https://lflvsplunksh01:8089/services/auth/login --data-urlencode username=a0493110 --data-urlencode password=mypassword

<response>
<sessionKey>7AH24BVGEB^64CzSgJrZWyI4kMAASmOMC395npKhZEwxG0g3Leh6Kpm5uxRTLWoSz07gTgbPqqlcHCJAomHMIRniHO1FgY2kimJBYYirzq1WJZQm</sessionKey>
<messages>
<msg code=""></msg>
</messages>
</response>

But requesting the login using Insomnia (a REST API endpoint tester), the login fails. I am sending the login credentials as JSON, as described in the Splunk tutorial.

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="WARN">Login failed</msg>
</messages>
</response>

* Preparing request to https://lflvsplunksh01:8089/services/auth/login
* Current time is 2023-08-08T22:23:10.266Z
* Enable automatic URL encoding
* Using default HTTP version
* Disable SSL validation
* Uses proxy env variable no_proxy == 'localhost,127.0.0.1,.micron.com,addmmsi'
* Too old connection (18958 seconds), disconnect it
* Connection 7 seems to be dead!
* Closing connection 7
* TLSv1.2 (OUT), TLS header, Unknown (21):
* TLSv1.2 (OUT), TLS alert, decode error (562):
* Hostname in DNS cache was stale, zapped
* Trying 10.192.88.222:8089...
* Connected to lflvsplunksh01 (10.192.88.222) port 8089 (#8)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=SplunkServerDefaultCert; O=SplunkUser
* start date: Apr 19 22:58:51 2023 GMT
* expire date: Apr 18 22:58:51 2026 GMT
* issuer: C=US; ST=CA; L=San Francisco; O=Splunk; CN=SplunkCommonCA; emailAddress=support@splunk.com
* SSL certificate verify result: self-signed certificate in certificate chain (19), continuing anyway.
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> POST /services/auth/login HTTP/1.1
> Host: lflvsplunksh01:8089
> User-Agent: insomnia/2023.4.0
> Content-Type: application/json
> Accept: */*
> Content-Length: 52
| {
| "username": "a0493110",
| "password": "mypassword"
| }
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< Date: Tue, 08 Aug 2023 22:23:10 GMT
< Expires: Thu, 26 Oct 1978 00:00:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, max-age=0
< Content-Type: text/xml; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 129
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Received 129 B chunk
* Connection #8 to host lflvsplunksh01 left intact

Any help would be greatly appreciated. I want to get it working first in Insomnia, then in a .NET client I am writing.
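The curl call that succeeds sends the credentials form-encoded (via --data-urlencode), while the failing Insomnia request sends Content-Type: application/json; /services/auth/login expects form-encoded fields, which would explain the 400. A minimal Python sketch of the working request shape (host and credentials are placeholders):

```python
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def build_login_request(base_url, username, password):
    # Form-encoded body, not JSON -- this mirrors what the working
    # curl --data-urlencode invocation actually sends.
    body = urlencode({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{base_url}/services/auth/login",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

def parse_session_key(response_xml):
    # The session key comes back as <response><sessionKey>...</sessionKey>.
    return ET.fromstring(response_xml).findtext("sessionKey")
```

In Insomnia the equivalent fix would be switching the body type from JSON to "Form URL Encoded" with username and password fields.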
Hi, I have a Splunk source that ingests data from multiple servers. I want to set up an alert on that source with a specific condition: if a particular message does not appear for 6 hours, the alert should be triggered. Below is an example search for the string:

index=index1 source=source1 host=host1 "got the message"

So if I don't find the message "got the message" for 6 hours, I want to trigger an alert.
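In Splunk this is typically a scheduled search over the last 6 hours whose alert trigger condition is "number of results is equal to 0". The trigger logic itself, sketched in Python for illustration (function name and window are mine):

```python
from datetime import datetime, timedelta

def should_alert(event_times, now, window=timedelta(hours=6)):
    # Trigger when no matching event arrived within the window ending
    # at `now` (an empty event list also triggers).
    return not any(now - t <= window for t in event_times if t <= now)
```

The same shape works per host if each host must be checked independently — in SPL that would mean something like stats on latest(_time) by host instead of a plain event count.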
Hello, I'm trying to set up an alert for when someone creates or modifies an Active Directory account with a uidNumber that already exists on another account. I already have a search that finds changes to accounts (below). I want to modify this search so that if the property that changed is "uidNumber", it searches LDAP to see if that uidNumber already exists on another account, and sends an alert containing both the new and existing account names, the uidNumber, and the admin who made the change.

As a separate, somewhat related question: any idea why I get no results at all when I remove "obj_dn" from the table command? I'm using ldapfilter here to get the cn of an object via the obj_dn field, but I didn't think I needed it anymore after that.

This is the current search I have to find all changes:

index=wineventlog EventCode=5136 sourcetype=WinEventLog
| sort -_time
| ldapfilter domain=*** search="(DistinguishedName=$obj_dn$)" attrs="cn"
| rename cn as affected_user, LDAP_Display_Name as Property, dir_svcs_action as action
| table _time, Account_Name, Property, Value, action, affected_user, obj_dn
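The duplicate check itself boils down to grouping accounts by uidNumber and flagging any uid owned by more than one account; in SPL that might be an ldapsearch piped into stats dc/values by uidNumber. The core step, sketched in Python (the account names are made up):

```python
from collections import defaultdict

def find_duplicate_uids(accounts):
    # accounts maps account name -> uidNumber; return uids owned by
    # more than one account, with the offending account names.
    by_uid = defaultdict(list)
    for name, uid in accounts.items():
        by_uid[uid].append(name)
    return {uid: names for uid, names in by_uid.items() if len(names) > 1}
```

An alert built on this shape naturally carries both account names plus the shared uidNumber, which matches the fields you want in the notification.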
Hi All, I am using the query below to fetch my records:

index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "Server side call completed for Collateral with record count"
| rex "Server side call completed for Collateral with record count:\s+(?<record>\d+)"
| timechart span=1d values(record) AS RecordCount

I am getting the records as shown in the screenshot. How can I separate them? Can someone guide me?
Hi All, below is my raw log, and I want to fetch the highlighted values from it:

2023-08-08 10:25:48.407 [INFO ] [Thread-3] CollateralProcessor - compareCollateralStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=0, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=08/06/2023, fileName=SETTLEMENT_TRANSFORM_COLLATERAL_LENDING, totalAchCurrOutstBalAmt=2.722379487286E10, totalAchBalLastStmtAmt=2.722379487286E10, totalClosingBal=2.722379487286E10, sourceName=null, version=0, associationStats={}] with collateralSum 2.722379487286E10 openingBal 2.73215647389E10 ageBalTot 2.722379487286E10 busDt 08/07/2023 with prevStatisticData null

Below is my query, but I am not able to fetch them:

index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "CollateralProcessor - compareCollateralStatsData"
| rex "CollateralProcessor - compareCollateralStatsData busDt=(?<busDt>),fileName=(?<fileName>),collateralSum =(?<collateralSum>)"
| table busDt fileName collateralSum
| sort busDt

Can someone guide me on how to fetch the highlighted values?
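The rex as written has empty capture groups and assumes an order and spacing the log does not have — collateralSum, for instance, appears outside the brackets as "with collateralSum <value>", with no "=". Patterns that do match the sample line, checked in Python (the sample is shortened from the post):

```python
import re

# Sample event copied (shortened) from the post.
LOG = ("2023-08-08 10:25:48.407 [INFO ] [Thread-3] CollateralProcessor - "
       "compareCollateralStatsData : statisticData: StatisticData "
       "[selectedDataSet=0, busDt=08/06/2023, "
       "fileName=SETTLEMENT_TRANSFORM_COLLATERAL_LENDING, "
       "totalClosingBal=2.722379487286E10, associationStats={}] "
       "with collateralSum 2.722379487286E10 openingBal 2.73215647389E10")

bus_dt = re.search(r"busDt=([\d/]+)", LOG).group(1)
file_name = re.search(r"fileName=([A-Z_]+)", LOG).group(1)
# collateralSum sits outside the brackets, space-separated, no "=".
collateral_sum = float(re.search(r"collateralSum\s+([\d.Ee+-]+)", LOG).group(1))
```

The SPL equivalent would be separate groups with real bodies, e.g. `rex "busDt=(?<busDt>[\d/]+)"` and `rex "collateralSum\s+(?<collateralSum>\S+)"` — hedged, untested against your events.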
Hi All, I have two raw logs and I want to fetch the values inside them:

2023-08-08 10:25:48.407 [INFO ] [Thread-3] CollateralProcessor - Statistic Cache loaded with stats for first run of Collateral Balancing with statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=0, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=08/06/2023, fileName=SETTLEMENT_TRANSFORM_COLLATERAL_LENDING, totalAchCurrOutstBalAmt=2.722379487286E10, totalAchBalLastStmtAmt=2.722379487286E10, totalClosingBal=2.722379487286E10, sourceName=null, version=0, associationStats={}]

2023-08-08 10:25:40.069 [INFO ] [Thread-3] CollateralProcessor - Statistic Cache loaded with stats for first run of Collateral Balancing with statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=0, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=08/06/2023, fileName=SETTLEMENT_TRANSFORM_COLLATERAL_CHARGE, totalAchCurrOutstBalAmt=4.81457540293E9, totalAchBalLastStmtAmt=4.81457540293E9, totalClosingBal=4.81457540293E9, sourceName=null, version=0, associationStats={}]

But the issue is that I am not able to get them separately; it's taking only one. My query:

index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "CollateralProcessor - Statistic Cache loaded with stats for first run of Collateral Balancing with statisticData: StatisticData"
| rex " CollateralProcessor - Statistic Cache loaded with stats for first run of Collateral Balancing with statisticData: StatisticData busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>)"
| table busDt fileName totalAchCurrOutstBalAmt
| sort busDt

I want to create separate results for SETTLEMENT_TRANSFORM_COLLATERAL_LENDING and SETTLEMENT_TRANSFORM_COLLATERAL_CHARGE. Please help.
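Since every key=value pair sits inside the StatisticData [...] block, one option is to pull the whole block, split it into pairs, and then group by fileName — which keeps LENDING and CHARGE separate. Sketched in Python with the two sample events shortened from the post:

```python
import re

# The two sample events from the post, shortened to the bracketed block.
EVENTS = [
    "... StatisticData [busDt=08/06/2023, fileName=SETTLEMENT_TRANSFORM_COLLATERAL_LENDING, totalAchCurrOutstBalAmt=2.722379487286E10]",
    "... StatisticData [busDt=08/06/2023, fileName=SETTLEMENT_TRANSFORM_COLLATERAL_CHARGE, totalAchCurrOutstBalAmt=4.81457540293E9]",
]

def parse_stats(event):
    # Pull every key=value pair out of the StatisticData [...] block.
    block = re.search(r"StatisticData \[(.*?)\]", event).group(1)
    return dict(re.findall(r"(\w+)=([^,\]]*)", block))

# One row per fileName, keeping the two feeds separate.
by_file = {parse_stats(e)["fileName"]: parse_stats(e) for e in EVENTS}
```

In SPL the analogous move is one rex with non-empty groups (e.g. `fileName=(?<fileName>[A-Z_]+)`), then stats or timechart ... by fileName — hedged, untested against your events.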
I am working on a team dashboard for shared-folder events; however, I am not receiving events with EventCode 4663. I have already enabled the audit policy, and I did the same on the folders I want to monitor, following this link: https://www.lepide.com/how-to/track-changes-made-to-files-of-shared-folder.html But I still do not receive shared-folder events in Splunk. Does anyone know what else I should check in order to receive these types of events?

*post translated to English by Splunk staff
Hi Team, below are my raw logs:

2023-08-08 10:25:13.067 [INFO ] [Thread-3] CollateralProcessor - Server side call completed for Collateral with record count:   476
2023-08-08 09:56:03.777 [INFO ] [Thread-3] CollateralProcessor - Server side call completed for Collateral with record count:  18541701

I am using the query below to fetch the number:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log" "Server side call completed for Collateral with record count"
| rex "Server side call completed for Collateral with record count :(?<record>\d+)"
| timechart span=1d values(record) AS RecordCount

I am not getting the record count, just the date. Can someone guide me here?
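A likely culprit is the whitespace: in the logs the colon comes directly after "count" with the spaces after it, while the rex has "count :" (space before the colon) and no allowance for spaces after. A quick check in Python with the two sample lines copied from the post:

```python
import re

LOGS = [
    "2023-08-08 10:25:13.067 [INFO ] [Thread-3] CollateralProcessor - "
    "Server side call completed for Collateral with record count:   476",
    "2023-08-08 09:56:03.777 [INFO ] [Thread-3] CollateralProcessor - "
    "Server side call completed for Collateral with record count:  18541701",
]

# "count:" then optional whitespace, then the digits.
counts = [int(re.search(r"record count:\s+(\d+)", line).group(1))
          for line in LOGS]
```

Ported back to SPL that would be roughly `rex "record count:\s+(?<record>\d+)"` — hedged, untested against your events.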
I'm trying to figure out why you would use the various methods for sending search results to an index. Note: I'm not trying to speed up searches, I'm just looking at methods for writing search results to an index.

1. Pipe to the "collect" command with the default "stash" source type, supposedly avoiding license usage.
2. Pipe to the "collect" command with a specified source type, incurring license usage.
3. Pipe to the "sendalert" command with the "logevent" alert action specified, or, if it's a saved search or alert, use the Log Event alert action.

Effectively, I think they all do the same thing. Option 1 seems like a sneaky way around license usage. However, I think "collect" can only be used with summary indexes. Any thoughts on this?
Hello, I'm creating a visualization and attempting to show the total number of events, broken down by a specific field. My initial search is something like the one below, where I just count all the events for a specific sourcetype:

index=foo sourcetype=bar | stats count as "Total Events for Security Control"

The other searches filter these by evaluating a third field, counting the events that match the condition and those that don't:

index=foo sourcetype=bar baz="Blocked" | stats count as "Total Blocked"
index=foo sourcetype=bar baz!="Blocked" | stats count as "Total Not Blocked"

The issue I'm seeing is that for one of my sourcetypes, the total number of events is not equal to the sum of the breakdown searches. Any idea why this might be happening?
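A likely cause, worth verifying against your data: events where baz is missing match neither baz="Blocked" nor baz!="Blocked", because != only matches events that actually have the field (NOT baz="Blocked" would include them). A small Python illustration of the arithmetic:

```python
# Events where the field is absent match neither baz="Blocked" nor
# baz!="Blocked" in Splunk, so the breakdowns can sum to less than
# the total. The sample events below are made up.
events = [
    {"baz": "Blocked"},
    {"baz": "Allowed"},
    {},  # field missing entirely -> excluded by both filters
]
total = len(events)
blocked = sum(1 for e in events if e.get("baz") == "Blocked")
not_blocked = sum(1 for e in events if "baz" in e and e["baz"] != "Blocked")
```

If that is what's happening, swapping the second breakdown to `NOT baz="Blocked"` (or adding a third bucket for events with no baz field) should make the counts add up.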
Hey guys, hope you're having a great day! Basically, I want to show/hide multiple sets of rows and panels depending on the main drop-down selection. Is there a <section depends="$choice1$"> </section> kind of tag that can contain multiple rows? Right now my only workaround is to display a link to that specific dashboard. I want to be able to merge all these dashboards into one and then just select between them with the main drop-down. Thank you, Best,
Hi Team, let's assume a scenario where I am streaming JSON data into S3 that needs to be consumed by the Splunk Cloud platform and displayed on a Dashboard Studio dashboard in real time. The data (JSON) is streamed into S3 as events (approximately 100-250 per second), and this data needs to be displayed on the dashboard in real time. In this scenario, would it be correct to use the SQS-based S3 input approach? Thanks.