All Topics

Hello, I want to extract the purple-highlighted part:

[Time:29-08@17:53:03.562] [60569219] 17:53:03.562 10.82.10.245 local3.notice [S=2952575] [SID=d57afa:30:1773441](N 71121555) #98)gwSession[Deallocated] [Time:29-08@17:53:03.562]
[60569220] 17:53:05.158 10.82.10.245 local3.notice [S=2952576] [SID=d57afa:30:1773434] (N 71121556) RtxMngr::Transmit 1 OPTIONS Rtx Left: 0 Dest: 211.237.70.18:5060, TU: AcSIPDialog(#28)(N 71121557) SIPTransaction(#471)::SendMsgBuffer - Resending last message [Time:29-08@17:53:05.158]
[60569221] 17:53:05.654 10.82.10.245 local3.notice [S=2952577] [SID=d57afa:30:1773434] (N 71121558) RtxMngr::Dispatch - Retransmission of message 1 OPTIONS was ended. Terminating transaction... [Time:29-08@17:53:05.654]
[60569222] 17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654]
[60569223] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
[60569224] 17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656]
[60569225] 17:53:05.657 10.82.10.245
Hello, instance principal authentication is not working in the OC19 realm. Any plan to support OC19? The debug log contains:

2024-09-03 08:16:14,077 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,413 DEBUG Starting new HTTP connection (1): x.x.x.x:80
2024-09-03 08:16:14,416 DEBUG http://x.x.x.x:80 "GET /opc/v2/instance/region HTTP/1.1" 200 14
2024-09-03 08:16:14,416 DEBUG Unknown regionId 'eu-frankfurt-2', will assume it's in Realm OC1
2024-09-03 08:16:14,636 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/cert.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,646 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/key.pem HTTP/1.1" 200 1675
2024-09-03 08:16:14,692 DEBUG http://x.x.x.x:80 "GET /opc/v2/identity/intermediate.pem HTTP/1.1" 200 None
2024-09-03 08:16:14,695 DEBUG Starting new HTTPS connection (1): auth.eu-frankfurt-2.oraclecloud.com:443

Thank you! NagyG
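A possible workaround (an assumption, not a confirmed fix): the OCI SDKs can usually be told about regions they don't ship with via the OCI_REGION_METADATA environment variable, which the "Unknown regionId ... will assume it's in Realm OC1" line suggests is the gap here. The realm values below are placeholders for illustration only; the real OC19 realmKey and domain component would have to come from Oracle.

```python
import json
import os

# Placeholder realm metadata -- NOT real OC19 values; substitute the
# values Oracle provides for the dedicated realm.
region_metadata = {
    "realmKey": "oc19",
    "realmDomainComponent": "oraclecloud.example",  # hypothetical domain
    "regionKey": "example",                          # hypothetical key
    "regionIdentifier": "eu-frankfurt-2",
}

# The OCI SDKs read OCI_REGION_METADATA at import time to register
# regions that are missing from their built-in region table.
os.environ["OCI_REGION_METADATA"] = json.dumps(region_metadata)
```

Setting this before importing the SDK should stop the "assume Realm OC1" fallback, though whether OC19 is fully supported end-to-end is a question for the SDK maintainers.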
I have events from a Trellix HX appliance, and I need to adjust the _time of the log events because it is coming in as 9/3/20 while we are on 9/3/2024. How can this be changed? Thanks.
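For anyone wondering why "9/3/20" lands in the wrong year: the sketch below (plain Python, outside Splunk) shows the two-digit-year ambiguity. In Splunk itself, the usual fix is an explicit TIME_FORMAT for that sourcetype in props.conf at parse time, so the timestamp is read with the intended year pattern.

```python
from datetime import datetime

# With a two-digit year pattern (%y), "9/3/20" parses as the year 2020.
two_digit = datetime.strptime("9/3/20", "%m/%d/%y")
print(two_digit.year)  # -> 2020

# If the source actually emits a four-digit year, %Y must be used instead.
four_digit = datetime.strptime("9/3/2024", "%m/%d/%Y")
print(four_digit.year)  # -> 2024
```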
Hello, I am currently working in a SOC, and I want to test rules in Splunk ES using the BOTSv2 dataset. How can I configure all the rules for it?
Hi Community, I ran into trouble when trying to activate the use case "User Login to Unauthorized Geo": it fails with an error saying I don't have the "sse_host_to_country" and "gdpr_user_category" lookup data. In this case I'm using ES Content Update v4.0.0, but I also have a lab with ES Content Update v4.38.0, and when I check it, it doesn't have any sse_host_to_country or gdpr_user_category lookup files either. I've already searched Google and haven't found an answer. Maybe this community has enough experience with this. Thanks.
There is no default solution in Splunk for managing the frozen bucket path. I wrote a script where you provide a config file specifying the volume or time limit for logs in the frozen path for each index. If the policy is violated, the oldest log is deleted. The script also provides detailed logs of the deletion process, including how much data and time remain in the frozen path for each index and how long the deletion process took. The entire script runs as a service and executes once every 24 hours. I've explained the implementation details and all necessary information in the link below.

Mohammad-Mirasadollahi/Splunk-Frozen-Retention-Policy: This repository provides a set of Bash scripts designed to manage frozen data in Splunk environments. (github.com)

FrozenFreeUp
Hi - We have a requirement to combine the two searches below into a single eval-style calculation. Would it be possible for someone to assist with a solution, please?

search 1 = index="wpg" host=*pz-pay* OrderSummary | stats count AS "Total"
search 2 = index="wpg" host=*pz-pay* OrderSummary "Address is invalid, it might contain a card number" | stats count AS "Failure"
result = (search 1 / search 2) * 100

Thanks, Tom
Hi All, I had a look around for a syntax definition for SPL in Notepad++ and didn't find one. Attached is my attempt. Feel free to use it. If you have any suggestions, changes, etc., then post a reply. Thanks everyone.
I found a similar post that did not quite fit the bill of what I am trying to do. I want to be able to create a link graph that shows a logical flow of all of our data from index>sourcetype>fields. Issues I am running into: | fieldsummary does not work with metadata and thus does not include the index or sourcetype. |tstats search is only able to show index and sourcetype. I figure there is a base search I need to set up to pull the initial sourcetypes to run fieldsummaries on, but I'm not sure how to string these techniques together or if something like this is even feasible without leaving a very heavy burden on the cluster. I would like to make this a report that updates a lookup weekly so that the dashboard is referencing the lookup instead of running this search. Thanks in advance for your time!
Hi Team, We are using an add-on to collect Azure metrics through the REST API. Data is getting ingested into Splunk Cloud; however, we are seeing a lag of exactly 4 hours. Splunk Cloud is in the UTC time zone. We have set TZ=UTC in apps/local/props.conf on the HF, as the application writes in UTC time, but there is still a lag in Splunk Cloud. https://splunkbase.splunk.com/app/3110 Any help is highly appreciated.
Other than poor speed and performance, is there a reason why the map command is considered dangerous? The official documentation says that the map command can result in data loss or potential security risks. But I don't see any details. Why?   https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Map    
Hi All, I am able to see only 4 statuses. Why am I not able to see status=skipped and status=continued?
The index appears in the indexer clustering dashboard on the cluster master, but when I try to search it from the search head, I can't find any data. I looked at splunkd.log on one of the indexers and it appears to be working fine. Should I do a restart, or do I need to change anything?
Hi, I want to extract this line from an event: RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy;
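One way to sketch the extraction, shown in plain Python for illustration (in Splunk, the same pattern could be used with the rex command, with a field name of your choosing):

```python
import re

# A sample event line based on the log in the question above.
line = (
    "17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] "
    "RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 "
    "(PS_ITSP): Proxy lost. looking for another proxy; Severity:major; "
    "Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; "
    "[Time:29-08@17:53:05.655]"
)

# Capture from "RAISE-ALARM" through the first ";" that precedes "Severity".
m = re.search(r"(RAISE-ALARM:\S+ \[.*?\].*?;)\s*Severity", line)
print(m.group(1))
```

This prints the alarm text from RAISE-ALARM up to and including "looking for another proxy;".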
Hi, I am trying to configure an AWS Lambda function running Node.js in AppDynamics. I have subscribed to the Serverless APM for AWS Lambda subscription. The Node.js version is 20.x. We selected a Lambda function, added a layer, and then added environment variables via the console. After adding the variables, the Lambda executes, but the application is not reporting to the AppDynamics controller. What could be the reason? Is any additional instrumentation required? Also, please confirm the ARN version to be used (the function is hosted in us-east-1) and whether the runtime is compatible with Node.js 20.
Hello members,

I have a clustered environment. I created an index on the HF along with data inputs to receive syslog, then created the same index in indexes.conf on the cluster master and pushed the configuration. The index does not appear in the indexer cluster on the CM and is not searchable. I tried using btool on each indexer, and my index appears among the loaded indexes.

So what is the problem?
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 7: master_uri (value: https://<address>:8089). Invalid key in stanza ... See more...
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 7: master_uri (value: https://<address>:8089).
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 8: pass4SymmKey (value: ***************************************).
Invalid key in stanza [clustermaster:one] in /apps/splunk/splunk/etc/apps/100_gnw_cluster_search_base/local/server.conf, line 9: multisite (value: true)
I have sample data pushed to Splunk as below. Help me with a Splunk query where I want only unique server names, with the final status as the second column. Compare the second-column status both horizontally and vertically for each server. The condition: if any second-column value is No for a server, consider No as the final status for that server; if all second-column values are Yes for a server, consider that server's final status as Yes.

sample.csv:
ServerName,Status,Department,Company,Location
Server1,Yes,Government,DRDO,Bangalore
Server1,No,Government,DRDO,Bangalore
Server1,Yes,Government,DRDO,Bangalore
Server2,No,Private,TCS,Chennai
Server2,No,Private,TCS,Chennai
Server3,Yes,Private,Infosys,Bangalore
Server3,Yes,Private,Infosys,Bangalore
Server4,Yes,Private,Tech Mahindra,Pune
Server5,No,Government,IncomeTax India,Mumbai
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore

Note: The Department, Location, and Company are the same for any given server; only the status differs between rows for the same server.

I already have a query to get the final status for a server. The query below gives me the unique final status count of each server:

| eval FinalStatus = if(Status="Yes", 1, 0)
| eventstats min(FinalStatus) as FinalStatus by ServerName
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| stats count(FinalStatus) as ServerStatus

But what I want is this: I have 3 dropdowns at the top of a classic dashboard (1. Department, 2. Company, 3. Location). Whenever I select a Department, Company, or Location from any of the dropdowns, I need to get the final status count of each server based on that field. For example, if Bangalore is selected from the Location dropdown, I need the final status count for those servers; if I select the company DRDO from the dropdown, I should get the final status count for servers based on company. I think it's something like | search Department="$department$" Company="$Company$" Location="$Location$". Please help with a Splunk query.
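For reference, the "No wins" aggregation plus the dropdown filters can be sketched outside Splunk like this (plain Python; the function name and keyword filters are illustrative, not part of the dashboard):

```python
import csv
import io

# A subset of the sample data from the question above.
sample = """ServerName,Status,Department,Company,Location
Server1,Yes,Government,DRDO,Bangalore
Server1,No,Government,DRDO,Bangalore
Server2,No,Private,TCS,Chennai
Server3,Yes,Private,Infosys,Bangalore
Server3,Yes,Private,Infosys,Bangalore
"""


def final_status(rows, department=None, company=None, location=None):
    """Per-server final status: "No" if any row is No, else "Yes".

    The optional keyword filters mirror the three dashboard dropdowns.
    """
    statuses = {}
    for row in rows:
        if department and row["Department"] != department:
            continue
        if company and row["Company"] != company:
            continue
        if location and row["Location"] != location:
            continue
        current = statuses.get(row["ServerName"], "Yes")
        is_no = current == "No" or row["Status"] == "No"
        statuses[row["ServerName"]] = "No" if is_no else "Yes"
    return statuses


rows = list(csv.DictReader(io.StringIO(sample)))
print(final_status(rows, location="Bangalore"))
# Server1 has a No row -> "No"; Server3 is all Yes -> "Yes"; Server2 is filtered out.
```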
Hello, I am getting a field ports in an event: ports="['22', '68', '6556']". How can I display them in separate rows?
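A sketch of the parsing logic in plain Python for illustration (in SPL, the usual equivalent is extracting the numbers into a multivalue field, e.g. with rex max_match=0, and then piping through mvexpand to get one row per port):

```python
import ast

# The field value is a Python-style list literal inside a string;
# ast.literal_eval parses it safely without executing arbitrary code.
ports_field = "['22', '68', '6556']"
ports = ast.literal_eval(ports_field)

for p in ports:
    print(p)  # prints 22, 68, 6556 on separate lines
```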