I got the errors "failed to: delete_local_spark_dirs on" and "failed to: force_kill_spark_jvms on" when I run /opt/caspida/bin/Caspida start-all. Any idea how I can resolve this?

Background: I was not able to access the web UI, so I ran /opt/caspida/bin/Caspida stop-all on the UBA manager. That command also returned an error, and when I then tried start-all, it showed the same error.
We have set up one alert that should trigger every hour. When we run the alert query manually, it shows results, but we did not receive the mail. There is no difference between index time and event time. In the scheduler logs the status shows as success, but I don't see any python logs and the alert did not fire.

What could be the reason for not receiving mail from the alert?
Dear Support,

I have 2 indexes (indexA, indexB) and one receiving server with 2 different ports (10.10.10.10:xx, 10.10.10.10:yy). I need my indexer to forward indexA to 10.10.10.10:xx and indexB to 10.10.10.10:yy. What is the best way to achieve this? I created two different apps with outputs, props, and transforms, and it does not work. I also tried one app with load balancing, and that does not work either.

Example of outputs.conf:

[tcpout]
defaultGroup = group1, group2

[tcpout:group1]
server = 10.10.10.10:xx
forwardedindex. = ???

[tcpout:group2]
server = 10.10.10.10:yy
forwardedindex. = ???

Is this a good way to do it? How should the forwardedindex settings look? What about props and transforms?

I would appreciate any help.

thanks
pawel
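One caveat: the forwardedindex.* filters apply to the global [tcpout] stanza and do not pick a target group per index. A documented alternative is index-based routing via props/transforms setting _TCP_ROUTING at parse time (this needs a heavy forwarder or the indexer itself to do the parsing). A sketch, where group1/group2 are the tcpout group names from the outputs.conf above:

```ini
# transforms.conf -- assign a tcpout group based on the event's index
[route_indexA]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexA$
DEST_KEY = _TCP_ROUTING
FORMAT = group1

[route_indexB]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexB$
DEST_KEY = _TCP_ROUTING
FORMAT = group2

# props.conf -- apply the routing transforms to all parsed data
[default]
TRANSFORMS-index_routing = route_indexA, route_indexB
```

With this in place you would likely not want defaultGroup = group1, group2, since that sends every event to both destinations regardless of routing.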
Hello,

I want to find, in a subsearch, the autonomous_system for an IP address I provide (in this example, 1.1.1.1). Then, based on the name of the autonomous_system returned from the subsearch, I want to find all IP addresses connecting to my network that belong to that autonomous_system. For now I have something like this:

index=firewall src_ip=* | lookup asn ip as src_ip [search index=firewall src_ip=1.1.1.1 | fields src_ip | lookup asn ip as src_ip | rename autonomous_system AS subsearch_autonomous_system | dedup subsearch_autonomous_system] | stats values(src_ip) by subsearch_autonomous_system

But when I run this search I get the error:

Error in 'lookup' command: Cannot find the source field '(' in the lookup table 'asn'.

Can anyone help me with that?

Regards
Daniel
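The lookup command cannot take a subsearch as one of its arguments, which is why the parser complains about '('. One way around this (a sketch, assuming the asn lookup has fields ip and autonomous_system) is to run the lookup on the outer search and use the subsearch only as a filter:

```spl
index=firewall src_ip=*
| lookup asn ip AS src_ip OUTPUT autonomous_system
| search
    [ search index=firewall src_ip=1.1.1.1
      | lookup asn ip AS src_ip OUTPUT autonomous_system
      | dedup autonomous_system
      | fields autonomous_system ]
| stats values(src_ip) AS src_ips by autonomous_system
```

The subsearch returns autonomous_system=<name>, which the outer search command then uses to keep only events from that AS.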
I have a use case for the ML feature: detecting anomalies in the comms sent from each ID. I was trying to get this from the predict function, but there are multiple IDs and I can't set up an alert/report individually for every ID. How can I do this? Please help.

The query I am trying:

index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz,qtr,jyk,klo,mno,ghr)
| timechart span=1d count as commSent by ID
| predict commSent as predicted_commSent algorithm=LLP holdback=0 future_timespan=24
| eval anomaly_score=if(isnull(predicted_commSent),0,abs(commSent - predicted_commSent))
| table _time, ID, commSent, predicted_commSent, anomaly_score

The query above is not giving any output; it seems the predict command does not work with multiple columns. Please suggest.
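One thing to note: after timechart ... by ID, each ID becomes its own column named after the ID value (abc, xyz, ...), so there is no commSent field for predict to find. predict can take several series, but each column has to be named explicitly. A sketch with two of the IDs, under that assumption:

```spl
index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz)
| timechart span=1d count by ID
| predict abc AS pred_abc xyz AS pred_xyz algorithm=LLP holdback=0 future_timespan=24
| eval anomaly_abc = if(isnull(pred_abc), 0, abs(abc - pred_abc))
| eval anomaly_xyz = if(isnull(pred_xyz), 0, abs(xyz - pred_xyz))
```

If spelling out every ID is impractical, the MLTK approaches that support a by clause (e.g. DensityFunction) may be a better fit for per-ID anomaly scoring in a single alert.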
Hi. I've tried to get Splunk to understand syslog messages coming from a Cisco Mobility Express setup. Mobility Express (ME) is the controller solution built into, in this setup, 3 AP3802I access points running 8.10.171.0.

I have been successful at getting and displaying data from a C2960L-8PS switch running IOS 15, but not from any access point (AP). I've set up syslogging from the ME directly to a single-instance Splunk demo lab running on Ubuntu with rsyslog. I can see data being logged into /data/syslog/192.168.40.20/

-rw-r--r-- 1 syslog syslog 9690 Sep 4 15:54 20230904-15.log
-rw-r--r-- 1 syslog syslog 41100 Sep 4 16:58 20230904-16.log
-rw-r--r-- 1 syslog syslog 9192 Sep 4 17:53 20230904-17.log

Examples of syslog messages:

2023-08-29T05:48:04.090627+00:00 <133>SampleSite: *emWeb: Aug 29 07:48:03.431: %AAA-5-AAA_AUTH_ADMIN_USER: aaa.c:3334 Authentication succeeded for admin user 'example' on 100.40.168.192
2023-09-04T17:01:52.684140+02:00 <44>SampleSite: *apfMsConnTask_0: Sep 04 17:01:52.495: %APF-4-PROC_ACTION_FAILED: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.
2023-09-04T17:01:52.718781+02:00 <44>SampleSite: *Dot1x_NW_MsgTask_0: Sep 04 17:01:52.530: %LOG-4-Q_IND: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.

I've installed TA-cisco_ios from Splunkbase. At the top of my etc/apps/search/local/inputs.conf I've added:

[monitor:///data/syslog/udp/192.168.40.20]
disabled = false
host = ciscome.example.net
sourcetype = cisco:wlc
#sourcetype = cisco:ap
index = default

For switches, cisco:ios works fine, but I cannot get cisco:wlc or cisco:ap to process data, it seems. Has anyone used Cisco Mobility Express with Splunk and gotten anything useful out of the logs? Am I doing it right? Thanks for any tips.
Hi, I want to create a table in the format below and provide the count for each cell. I have multiple fields in my index and I want to create a table (similar to an Excel pivot) using three fields: App Name, Response code, and Method.

index=abcd | chart count over "App Name" by "Response code"

The above works for me, but I can only create a table using 2 fields. How can I create a table in the format below with 3 or more fields? Please could you help.

APP NAME | RESPONSECODE 200    | RESPONSECODE 400    | RESPONSECODE 400
         | GET  POST  PATCH    | GET  POST  PATCH    | GET  POST  PATCH
APP1     |
APP2     |
APP3     |
APP4     |
APP5     |
APP6     |
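chart only takes one "over" field and one "by" field, so the usual trick is to combine two of the three fields with eval first. A sketch, assuming the field names are App_Name, Response_code, and Method (adjust quoting to match your actual field names):

```spl
index=abcd
| eval rc_method = Response_code . "_" . Method
| chart count over App_Name by rc_method limit=0
```

This produces one row per app and one column per ResponseCode/Method combination (200_GET, 200_POST, 400_PATCH, ...), which is the pivot layout above with the two header rows flattened into one.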
Hi, how can I determine which index is responsible for the majority of Splunk license consumption when analyzing security data in ES?
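License usage is logged per index in license_usage.log on the license manager, so a search along these lines (run where the license manager's _internal data is visible) shows which index consumes the most:

```spl
index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) AS bytes by idx
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| sort - GB
```

Here idx is the index name and b is the number of bytes counted against the license.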
I want to calculate the error count from the logs, but there are two types of errors, which can be distinguished only from the flow-end event, i.e. [ flow ended put :sync\C2V ]. What condition can I add so that I can get this information from the log above?

index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api" ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload"))

I am using this query to get the logs below. Now I want a condition so that when severity=ERROR, I can also get the severity=INFO "Received Payload" event (for the correlationId details) and the flow-end event, so that I can determine the error type.
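If the ERROR, "Received Payload", and flow-end events share a correlation field, one option is to group them with stats and keep only the groups that contain an error. A sketch, assuming a field named correlationId is already extracted (the search terms are kept as in the query above):

```spl
index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api"
    ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload") OR "flow ended")
| stats count(eval(severity="ERROR")) AS error_count
        values(eval(if(severity="INFO", _raw, null()))) AS related_info_events
        by correlationId
| where error_count > 0
```

related_info_events then carries the payload and flow-end lines for each failing correlationId, which can be used to classify the error type.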
Any idea how to configure AppDynamics metrics for total calls per 1 hour and total calls per 24 hours? Please help me here.
Hello everyone!

I have a UF installed on an MS file server. Our Unified Communications Manager sends CDR and CMR files to this file server via SFTP. Often enough, I see error messages like the one in the screenshot (the UF cannot read the file). The strangest thing is that all information from such files is read successfully anyway. What is wrong with my UF settings? Or is this perhaps not the UF at all?

props.conf

[ucm_file_cdr]
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = False
MAX_TIMESTAMP_LOOKAHEAD = 60
initCrcLength = 1500
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

[ucm_file_cmr]
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = False
MAX_TIMESTAMP_LOOKAHEAD = 13
initCrcLength = 1000
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^INTEGER\,INTEGER\,INTEGER.*$
DEST_KEY = queue
FORMAT = nullQueue
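One detail worth checking: initCrcLength is documented as an inputs.conf setting for the monitor input on the forwarder; placed in props.conf it has no effect. If the "cannot read file" messages come from CRC checks on the similar-looking CDR/CMR files, a sketch of the inputs.conf side (the monitor path here is a hypothetical placeholder) would be:

```ini
# inputs.conf on the UF -- path is an example, use your actual CDR drop directory
[monitor://D:\cdr_repository\*]
sourcetype = ucm_file_cdr
initCrcLength = 1500
# <SOURCE> is a literal keyword: mixes the full file path into the CRC,
# so files with identical headers are not mistaken for already-read files
crcSalt = <SOURCE>
```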
I have a Splunk alert where I specify the fields using "| fields ErrorType host UserAgent Country IP_Addr", and I want to receive this column order in the SOAR platform. When I look at the JSON results and the UI in SOAR, the column order has changed to host, Country, IP_Addr, ErrorType, UserAgent (not the expected result). I think this has to do with the REST call and the JSON data, but I would like to check whether there is any quick fix on the Splunk or SOAR side to show the proper column order. Any help on this will be much appreciated.
I have a question about security advisory SVD-2023-0805. It states only Splunk Web is affected, but the description clearly mentions the issue is caused by how OpenSSL is built, which is a very generic library. For this reason I would like to check if indeed only Splunk Web is affected, or that Splunk installations on Windows in general are affected. I can imagine that OpenSSL is also used when a SSL/TLS connection is made from a forwarder to an indexer. This leads to the question: are universal forwarders on Windows also affected by this security advisory, even when Splunk Web is disabled?
Hello,

Currently we have an NFS drive mounted on the /opt/archive directory. The Splunk indexer installation is on Red Hat. We plan to change the remote storage IP address.

Current entry in /etc/fstab:

192.168.24.1:/opt      /opt/archive     nfs    vers=4,rw,intr,nosuid  0  0

1. Before unmounting, is it required to stop the rolling of cold buckets to frozen? How do I stop this roll?
2. After mounting the new remote drive for frozen buckets, is there a way to verify that the frozen directory is receiving buckets from cold?
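For context on question 2: the archive destination is configured per index with coldToFrozenDir in indexes.conf, and since buckets roll to frozen when they exceed frozenTimePeriodInSecs (or the index hits its size limits), the simplest verification is that new bucket directories appear under the new mount after buckets age out of cold. A sketch, where the index name and retention are examples only:

```ini
# indexes.conf -- archive frozen buckets to the NFS mount instead of deleting them
[myindex]
coldToFrozenDir = /opt/archive/myindex
frozenTimePeriodInSecs = 7776000
```

For question 1, stopping splunkd for the duration of the remount (or remounting the new export at the same /opt/archive path) is the straightforward way to avoid freeze attempts against a missing mount; treat this as an assumption to verify against your retention settings rather than a definitive procedure.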
Hello, I've been working with AppDynamics for some time now, and I'm looking to enhance our monitoring and analytics capabilities by integrating it with Splunk. I believe this integration can offer a wealth of insights. Has anyone here successfully integrated AppDynamics with Splunk? I'm particularly interested in hearing about any best practices, challenges you've encountered, and the impact it has had on your application monitoring and troubleshooting efforts. Additionally, if anyone has pursued the Splunk Certification or is familiar with the certification process, could you share your experiences and any specific aspects of Splunk that you found especially relevant in the context of AppDynamics integration? I also checked this: https://splunkbase.splunk.com/app/4315#:~:text=StreamWeaver%20makes%20integrating%20your%20AppDynamics,end%20observability%20and%20AIOps%20goals. Thanks in advance!
I have an alert which runs to find a few values, and I need to write the result of the alert to a new index I have created. I have used the "Log Event" alert action and specified the newly created index as the destination for the alert output. The output is not getting ingested into the new index, but when I tried with main (the default index), the output of the alert did get ingested. The newly created index itself is working; I tried ingesting other data into it manually from files. So what could be the reason the alert results are not getting ingested into the newly created index?
Hi, I am trying to log in to the Search Head server. It gives me the error:

500 Internal Server Error
Oops. The server encountered an unexpected condition which prevented it from fulfilling the request.
Click here to return to Splunk homepage.

If I enter a wrong password, it gives a wrong-password error, so it looks like this is not related to authentication.
Network - vulnerabilities detected on switches not resolved over a month
A configured field is not showing up under interesting fields. I am getting ';;;;;;;;;;;;;' values after searching with index="Index Name" sourcetype=*
Hi team,

I need to extract new fields using rex from the raw data below:
1. ResponseCode
2. url

message: INFO [nio-8443-exce-8] b. b. b.filter.loggingvontextfilter c.c.c.c.l.cc.f.loggingcintextfil=ter.post process(Loggingcintextfilter.java"201)-PUT/actatarr/halt/liveness||||||||||||METRIC|--|Responsecode=400|Response Time=0
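A sketch of two rex extractions matched against the sample line above, assuming the text lives in _raw (replace field=_raw with field=message if it is a separate extracted field):

```spl
... | rex field=_raw "(?:GET|PUT|POST|PATCH|DELETE)(?<url>/[^\|]+)"
    | rex field=_raw "Responsecode=(?<ResponseCode>\d+)"
```

The first pattern anchors on the HTTP method and captures the path up to the first pipe; on the sample it would yield url=/actatarr/halt/liveness and ResponseCode=400.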