Evening all,

I have been working on setting up a TAXII feed pulling observables in from CISA/DHS, but I seem to be encountering the following error, which looks like an SSL error: ssl.SSLError: [SSL] PEM lib (_ssl.c:3954). I've been digging around but can't find much on this exact error code. The cert and key files are defined correctly; we use the same cert/key files in a separate technology, MineMeld, which works as expected. The files are uploaded into the credential manager and I followed the documentation at https://docs.splunk.com/Documentation/ES/6.5.0/Admin/Downloadthreatfeed.

2021-05-04 19:38:06,931+0000 ERROR pid=16982 tid=MainThread file=threatlist.py:download_taxii:473 | [SSL] PEM lib (_ssl.c:3954)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/threatlist.py", line 436, in download_taxii
    taxii_message = handler.run(args, handler_args)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/taxii_client/__init__.py", line 171, in run
    return self._poll_taxii_11(parsed_args)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/taxii_client/__init__.py", line 81, in _poll_taxii_11
    http_resp = client.call_taxii_service2(args.get('url'), args.get('service'), tm11.VID_TAXII_XML_11, poll_xml, port=args.get('port'), timeout=args['timeout'])
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 344, in call_taxii_service2
    response = urllib.request.urlopen(req, timeout=timeout)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 374, in https_open
    return self.do_open(self.get_connection, req)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 1318, in do_open
    h = http_class(host, timeout=req.timeout, **http_conn_args)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 382, in get_connection
    key_password=self.key_password)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 437, in __init__
    cert_file, key_file, password=key_password)
ssl.SSLError: [SSL] PEM lib (_ssl.c:3954)

Any thoughts on what this could be?

Cheers,
Tom
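One way to narrow this down is to load the same cert/key pair with Python's ssl module outside of Splunk: the "[SSL] PEM lib" error is what OpenSSL raises when it cannot parse one of the PEM files (wrong encoding such as DER, a truncated file, or an encrypted private key with a missing or wrong password). A minimal sketch, with placeholder paths:

```python
import ssl

def check_pem_pair(cert_path, key_path, password=None):
    # Load the cert/key pair the same way Python's ssl module (and
    # therefore the TAXII client) does.  Raises ssl.SSLError with a
    # "PEM lib" message if either file is not readable PEM, or if the
    # key is encrypted and the password is missing/wrong.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path, password=password)
    return True
```

Running this with Splunk's bundled interpreter (e.g. /opt/splunk/bin/splunk cmd python3) checks the files against the same OpenSSL build the TAXII client uses, which can behave differently from the system Python that MineMeld relies on.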
Hey Splunksters,

I have an Azure VM with a forwarder on it that is supposed to reach out to my on-prem deployment server (which I have done successfully in my separate development environment).

A little backstory in case it helps anyone: the forwarder was originally installed with a script that pointed to the wrong deployment server, so I uninstalled it and reinstalled it pointing at the correct deployment server. I checked deploymentclient.conf and it is pointed correctly, restarted the service, etc., but none of the deployment apps are showing up on the client, and the DS GUI does not show the client server in forwarder management.

I ran Test-NetConnection from the Azure machine to the DS on the DS management port and got TCP success. However, when I run a test connection from the DS back to the Azure machine, it fails, which would lead me to believe there is a firewall issue or a port that isn't bi-directional. HOWEVER, my DEV setup gives me the exact same test-connection failure from the DS to the Azure client machine, but it still works (data comes in, apps deploy, etc. in DEV), so I'm confused. That's why I feel like I am taking crazy pills.

I'm considering a full reboot of the client machine, but that is not optimal at the moment. Thanks!
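For what it's worth, the DS-to-client test failing may be a red herring: deployment clients poll the deployment server over its management port, so only the client-to-DS direction needs to succeed, which would explain why DEV fails the same test yet still works. A small sketch of the same TCP reachability check Test-NetConnection performs (host and port below are placeholders):

```python
import socket

def can_reach(host, port, timeout=5):
    # Rough equivalent of Test-NetConnection's TcpTestSucceeded:
    # True if a TCP connect to host:port completes within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running it from the Azure client against the DS (e.g. can_reach("ds.example.local", 8089)) confirms the one direction that actually matters for phone-home.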
I have a scheduled report that delivers the table results below in an email. The requirement is to color the 'Validation Result' column as shown below, but even after setting up these values in the report's 'Advanced edit' options, the results are not colored. Any suggestions on how to get this done?
Our sendemail function seems to have stopped working, or is only working sporadically: it might send an email the next day for an issue that occurred the day before, or it may not send anything at all. Has anyone seen this before? We ran into something similar in the past and edited sendemail.py for that one, but this issue is brand new, and we're seeing it in both our 7.2.6 and our 8.1.2 environments. We are a clustered environment.
Hey Splunk friends,

I currently have 32 indexes spread across 2 peers managed by 1 master. The total space for these indexes has now reached just under 3,000 GB (one index alone is 1,486 GB). We don't really have any performance issues at present, but when the Splunk machines get restarted for any reason, it does take some time for the indexes to catch up (replication factor, search factor, etc.). On the odd occasion when an issue has lasted longer, we have seen bucket issues.

My question: are 32 indexes (3,000 GB) too much for one cluster of two peers? If so, should I create another cluster, or add additional peers?
We've set up the Microsoft Teams add-on and have it working for one client. We were wondering whether the same webhook can be used to connect to multiple tenants, or whether we'll need to create a new webhook per tenant.
I have a client that has Splunk deployed on their business network, and they would like to ingest data from an isolated network. The isolated network can send data outbound, but cannot receive inbound connections/communications. My recommendation was to collect the data on the isolated network and push it out to Splunk via one-way file transfer, then have Splunk monitor the file(s).

Has anyone else run into this scenario? If so, how did you architect a solution, and was it successful? My assumption is that universal forwarders will not work in this situation because they require two-way communication via TCP. Thank you.
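If the landing-zone approach is used, a hedged sketch of what the input stanza on the receiving Splunk instance might look like (the path, index, and sourcetype below are placeholders):

```ini
# inputs.conf on the Splunk server watching the one-way-transfer landing directory
[batch:///data/diode_landing/*.log]
move_policy = sinkhole
index = isolated_net
sourcetype = isolated:logs
```

A batch input with move_policy = sinkhole deletes each file once it is indexed, which keeps the landing directory from growing without bound; a plain [monitor://...] stanza would index the same files but leave them in place.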
Hello, I'm trying to show this event as a table:

2021-05-04 11:28:56.722, TIME="2021-05-04 11:28:56.722", ID="0a7a270b79341ba28179372363920a5d", CREATED="1620127736722", SOURCE="Group Aggregation", ACTION="entitlement_attribute_change", TARGET="CN=g-cvi_admin_test,OU=CVI,OU=Security,OU=Control Groups,DC=base,DC=dev", APPLICATION="AD Base Direct", ACCOUNT_NAME="memberOf", INSTANCE="003608aa42a7425793ea73cc7f9f8e65", ATTRIBUTE_NAME="msDS-PrincipalName", ATTRIBUTE_VALUE="BASEDEV\g-cvi_admin_test", ATTRIBUTES="<Attributes> <Map> <entry key="attributeName" value="msDS-PrincipalName"/> <entry key="newValue" value="BASEDEV\g-cvi_admin_test"/> <entry key="oldValue" value="BASEINT\g-cvi_admin_test"/> </Map> </Attributes> ", STRING1="Change of group of value CN=g-cvi_admin_test,OU=CVI,OU=Security,OU=Control Groups,DC=base,DC=dev on AD Base Direct", STRING2="BASEINT\g-cvi_admin_test", STRING3="group"

I have all the fields extracted correctly, even ATTRIBUTES:

<Attributes> <Map> <entry key="attributeName" value="msDS-PrincipalName"/> <entry key="newValue" value="BASEDEV\g-cvi_admin_test"/> <entry key="oldValue" value="BASEINT\g-cvi_admin_test"/> </Map> </Attributes>

From the ATTRIBUTES field, using:

|rex max_match=0 field=ATTRIBUTES "<entry key=\"(?<key_xml>[a-zA-Z0-9_]+?)\" value=\"(?<value_xml>[\s\S]+?)(?:\"\/>)"

I get key_xml and value_xml as multivalue fields. I would like to have key_xml as the column names and value_xml as the row cells for the corresponding keys. Thanks to whoever can help me.
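In SPL this pivot is usually done by zipping the two multivalue fields together (e.g. with mvzip and mvexpand) before splitting them back out, but the target shape is easier to see in a quick sketch. Purely for illustration (Python rather than SPL), using the sample ATTRIBUTES value from the question:

```python
import re

ATTRIBUTES = (
    '<Attributes> <Map> '
    '<entry key="attributeName" value="msDS-PrincipalName"/> '
    '<entry key="newValue" value="BASEDEV\\g-cvi_admin_test"/> '
    '<entry key="oldValue" value="BASEINT\\g-cvi_admin_test"/> '
    '</Map> </Attributes>'
)

# Same idea as the rex: capture each key/value pair, then pair the keys
# with their values so that each key becomes a column of one row.
pairs = re.findall(r'<entry key="([a-zA-Z0-9_]+?)" value="([\s\S]+?)"/>', ATTRIBUTES)
row = dict(pairs)  # {"attributeName": ..., "newValue": ..., "oldValue": ...}
```

The resulting dict is exactly the row being asked for: one column per key_xml value, one cell per value_xml value.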
Hi, I am trying to extract fields from the following event:

[04 May 2021 13:13:59,786] [Nsh-Proxy-Thread-93] [INFO] [abc@abc.com:abc:10.123.123.123] [BLSSOPROXY] Connected to sm478383922 with a socket descriptor 304

I want to extract "[abc@abc.com:abc:10.123.123.123]" and the destination "sm478383922" and put them in table form:

Username      Group  Src Ip           Destination
abc@abc.com   abc    10.123.123.123   sm478383922

Can you help me achieve this in a Splunk search?
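One candidate pattern (the field names are chosen to match the desired table, so treat the exact regex as an assumption about your log format), shown here in Python; the same expression should work in | rex with the named groups written as (?<Name>...) instead of (?P<Name>...):

```python
import re

LINE = ('[04 May 2021 13:13:59,786] [Nsh-Proxy-Thread-93] [INFO] '
        '[abc@abc.com:abc:10.123.123.123] [BLSSOPROXY] '
        'Connected to sm478383922 with a socket descriptor 304')

# The user/group/ip bracket is the only one containing an "@", which is
# what anchors the match past the timestamp/thread/level brackets.
PATTERN = re.compile(
    r'\[(?P<Username>[^:\]]+@[^:\]]+):(?P<Group>[^:\]]+):(?P<Src_Ip>[^\]]+)\]'
    r'\s+\[\w+\]\s+Connected to (?P<Destination>\S+)'
)

m = PATTERN.search(LINE)
```

Once extracted, `| table Username Group Src_Ip Destination` gives the layout above.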
I am trying to get an alert to recognize a lookup file containing a whitelist of external devices. Some devices I don't care to see, while others I do. I only want the alert to trigger when the whitelist is set to 0, based on the Device_ID search field. For unknown reasons, though, the alert still triggers despite these settings. I am also using an asterisk in my Device_IDs and have updated the lookup definition to use WILDCARD(Make_Model). My search mode is set to Fast Mode, and I have tried the others as well. I am manually populating the lookup file.

index=xxxx EventCode=6416 NOT Device_ID IN(SWD*,DISPLAY*) | lookup pnp Make_Model as Device_ID | search NOT WhiteList=1

"pnp" is the name of my lookup definition. The CSV file was imported into Splunk Enterprise and appears under lookup table files. I'd appreciate any recommendations or suggestions on how to improve this search and lookup file.
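Two things worth checking. First, `| search NOT WhiteList=1` also keeps every event where WhiteList is null, i.e. every Device_ID that matched nothing in the lookup, so any device missing from the CSV still triggers the alert; `| search WhiteList=0` restricts results to explicit matches instead. Second, to sanity-check how WILDCARD matching behaves, here is a small mock of it (the patterns and flags below are made up, not from your lookup):

```python
from fnmatch import fnmatchcase

# Hypothetical lookup rows: Make_Model wildcard patterns with a WhiteList flag.
LOOKUP = [
    ("SWD*", 1),
    ("USB\\VID_0781*", 0),
]

def whitelist_flag(device_id):
    # Mimic a WILDCARD(Make_Model) lookup: the first matching pattern
    # supplies the flag; no match leaves the field null in Splunk.
    for pattern, flag in LOOKUP:
        if fnmatchcase(device_id, pattern):
            return flag
    return None
```

The None case is the one that slips through `NOT WhiteList=1`, which may be exactly why the alert keeps firing.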
Hi, I am using the MLTK's DensityFunction on my data model fields and want to use partial_fit=true, but I'm getting the following error:

"Error in 'fit' command: Algorithm "DensityFunction" does not support partial fit"

The Splunk docs state that DensityFunction supports partial_fit (link).

My query:

| some tstats....... | `drop_dm_object_name(Authentication)` | fit DensityFunction "activity_count" by "vendor_account" dist=auto show_density=true partial_fit=true into vendor_account_auth
Hi, I'm trying to install Splunk Stream in a distributed environment, but the more I read, the more confused I get and the less I understand!

I have a deployment server deploying the Stream_TA_stream app to a universal forwarder on a Windows 10 PC (10.1.1.1), and this looks to be successful, as I'm seeing the app along with my inputs.conf. Within my inputs.conf I have splunk_stream_app_location set to http://10.1.1.11:8000/en-us/custom/splunk_app_stream/ (10.1.1.11 is my Splunk Stream server with splunk_app_stream and splunk_TA_stream_wire_data installed).

Under the Stream app config I have created a test stream looking for ICMP, and under Distributed Forwarder Management a group with HEC off and an endpoint URL of http://10.1.1.1.windows.uf:8088. In Matched Forwarders, should I be seeing my Win10 PC 10.1.1.1 in the preview of matched forwarders?

I have used Wireshark and can see comms on TCP port 8000, including the HTTP 'ping' and an HTTP 200 OK response, but no other communications on any other port. Can anyone shed some light on what should happen next and how the Stream config is 'shared'? Any help/advice would be greatly appreciated. Cheers. Paul.
Hi, I have a list of accounting codes in a lookup table, which I use to identify the applications under each accounting code. If an accounting code is not in the lookup table, it becomes "Others". How do I generate a report that lists only those "Others", showing the accounting code used?

Thanks and regards,
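One way to frame it: the "Others" are simply the codes present in the events but absent from the lookup. In SPL this is typically the lookup with an OUTPUT field followed by `| where isnull(<output_field>)`; the underlying set logic, sketched with made-up codes for illustration:

```python
# Stand-in for the lookup table's accounting codes (hypothetical values).
KNOWN_CODES = {"AC100", "AC200"}

# Stand-in for the accounting codes seen in events.
events = [
    {"accounting_code": "AC100"},
    {"accounting_code": "ZZ999"},
    {"accounting_code": "AC300"},
]

# "Others" = codes used in events that the lookup does not know about.
others = sorted({e["accounting_code"] for e in events} - KNOWN_CODES)
```

The report then only needs to list `others`, i.e. the distinct unmatched codes.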
Hi, I have a table like this:

test  total  productA_xxxx  productA_zzzz  productB_xxxx  productB_zzzz
1     22     0.23           0.36           0.44           0.55

What I want is a table like this:

test  total  object    xxxx  zzzz
1     22     productA  0.23  0.36
1     22     productB  0.44  0.55

How can I extract "product" from the name of the field? Can you help me, please? Thank you!
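In SPL this reshape is usually untable followed by a rex on the field name and a re-pivot. Purely as an illustration of the transformation (column names taken from the question, Python used only to show the logic): split each product column on its last underscore, use the left half as the new "object" value and the right half as the new column name:

```python
ROW = {"test": 1, "total": 22,
       "productA_xxxx": 0.23, "productA_zzzz": 0.36,
       "productB_xxxx": 0.44, "productB_zzzz": 0.55}

def wide_to_long(row):
    # Emit one output row per product, carrying the id columns along.
    out = {}
    for name, value in row.items():
        if "_" not in name:
            continue  # "test" and "total" are id columns, not measures
        obj, metric = name.rsplit("_", 1)
        out.setdefault(obj, {"test": row["test"],
                             "total": row["total"],
                             "object": obj})[metric] = value
    return sorted(out.values(), key=lambda r: r["object"])
```

Each dict in the result corresponds to one row of the desired table.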
I am building a dashboard and making it all pretty for management. What I want to be able to do is compare the last two scans: get the difference between this week's total vulnerabilities and the previous scan's, and know how many vulnerabilities were remediated or not remediated.
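Purely as a sketch of the comparison logic (the identifiers below are made up): treating each scan as a set of vulnerability IDs, the remediated items are those in the previous scan but not the current one, the carried-over items are the intersection, and the headline delta is the difference in totals:

```python
# Hypothetical vulnerability IDs from the two most recent scans.
last_scan = {"VULN-101", "VULN-102", "VULN-103"}
this_scan = {"VULN-102", "VULN-104"}

remediated = last_scan - this_scan          # fixed since the previous scan
still_open = last_scan & this_scan          # not remediated
new_findings = this_scan - last_scan        # introduced this week
delta_total = len(this_scan) - len(last_scan)
```

In a search, the same idea is usually a stats/eventstats over the two latest scan timestamps, but the set arithmetic above is what the dashboard panels ultimately display.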
Hi there, can someone please help?

I am using the free trial version of Splunk Enterprise. I have set up a data input (monitor) on a folder containing three files, one for each month: Jan, Feb, and Mar. Feb and Mar load without any problems; however, Jan fails to load with the following error in the log files:

05-04-2021 12:17:37.361 +0100 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=C:\SplunkData\PI\POC_PI_Data_Jan.csv). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

I have tried deleting the data input and the related index and recreating them from scratch, without any luck. I then deleted everything a second time, restarted the splunkd Windows service, and created everything from scratch a third time, but I just get the same error. There does not appear to be anything wrong with the CSV file, which is 916,686 rows long with 4 fields (Tag, TimeStamp, Value, Status).

Kind regards,
Paul J.
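As the error suggests, Splunk fingerprints each monitored file by a CRC of its first few hundred bytes; since all three CSVs presumably start with the same header row, the Jan file can look like a file Splunk has already read. A hedged sketch of the two workarounds the message itself mentions, as settings on the monitor stanza in inputs.conf (the path, index, and sourcetype below are placeholders):

```ini
[monitor://C:\SplunkData\PI\*.csv]
index = pi_poc
sourcetype = pi_csv
# Option 1: hash more of the file so identical headers no longer collide.
initCrcLen = 1024
# Option 2: fold the full path into the checksum so each file is unique.
# crcSalt = <SOURCE>
```

Note that crcSalt = <SOURCE> causes previously indexed files to be re-read once (the checksum changes), so expect a one-time duplicate of Feb and Mar if you switch it on.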
Hi Team, kindly help me with the below scenario.

As shown in the diagram, my query is: index=main | stats count by log_level

I would like the background color for ERROR to be red, WARN amber, and INFO green. The color shouldn't depend on the values: whatever the count, the panels should always show red for ERROR, amber for WARN, and green for INFO.

Kindly help me with the same.

Thanks & regards,
Reddy
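One approach, assuming this is a Simple XML dashboard table, is an expression-based color palette keyed on the cell's text rather than on numeric thresholds; the hex colors below are placeholders you can swap for your own red/amber/green:

```xml
<table>
  <search>
    <query>index=main | stats count by log_level</query>
  </search>
  <!-- Color the log_level cell by its value, independent of the count -->
  <format type="color" field="log_level">
    <colorPalette type="expression">case(value == "ERROR", "#DC4E41", value == "WARN", "#F8BE34", value == "INFO", "#53A051")</colorPalette>
  </format>
</table>
```

Because the expression tests the string value of log_level, the colors stay fixed no matter what the counts are.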
Any suggestions on indexing GDPR (PCI/PII) data into Splunk and sending protected reports to users? Also, is it possible to prevent this data from being visible to other Splunk users?
Hi,

I am trying to satisfy the requirements to install the Exchange app (4.0.2) in my test environment (Splunk 8.1.3) and get the following error: so the AD add-on is newer than expected?!

I tried to bypass the requirements, and it looks like Splunk gets data from Windows but not for Exchange/AD. It is true that there is not much data for the last 24h, as I only just started everything, but I would expect some data. The AD add-on was configured correctly.

Any suggestions? Thanks, Christian
Hello, I'm quite new to all this. We aim to integrate Splunk with Remedy, and later on ITSI with Remedy, to be able to create tickets, etc.

I've been told by our Remedy expert that we can't use the Splunk Add-on for BMC Remedy because "the Splunk add-on seems to be built for ITSM, which is a pre-built module from BMC. We have a custom-made one."

Does anyone have any opinions/input regarding his statement? If we're not able to use the add-on, any recommendations on how to proceed?

Regards,
Jonte