All Posts


In addition to what @dural_yyz said, you can restrict team1 from accessing the dedicated search head by not creating team1's users on the new instance. Also make sure to assign appropriate capabilities to team2 so that they cannot modify or search the indexes belonging to team1.

Thanks,
Tejas
Hey @kirk_in_porto,

These are the minimum required hardware specifications. It is not that Splunk won't work with fewer cores or less RAM, but performance will suffer. You might not get efficient output from fewer cores if you intend to run many searches or processes through the license manager instance; it will still try to function as best it can. However, for training purposes it is fine to give an instance fewer cores and less RAM.

Thanks,
Tejas.
App 'Infoblox DDI' started successfully (id: 1725978494606) on asset: 'infoblox-enterprise' (id: 25)
Loaded action execution configuration
Logging into device
Configured URL: https://10.247.53.30
Querying endpoint '/?_schema' to validate credentials
Connectivity test succeeded
Exception Occurred. 'str' object has no attribute 'formate'.
Traceback (most recent call last):
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 349, in _make_rest_call
    content_type = request_obj.headers[consts.INFOBLOX_JSON_CONTENT_TYPE]
  File "/opt/phantom/data/usr/python39/lib/python3.9/site-packages/requests/structures.py", line 52, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'content-type'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "lib3/phantom/base_connector.py/base_connector.py", line 3204, in _handle_action
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 1173, in finalize
    return self._logout()
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 444, in _logout
    status, response = self._make_rest_call(consts.INFOBLOX_LOGOUT, action_result)
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 357, in _make_rest_call
    self.debug_print("{}. {}".formate(message, error_message))
AttributeError: 'str' object has no attribute 'formate'
Connectivity test succeeded
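The root cause is visible in the last frame of the traceback: `infobloxddi_connector.py` calls `"{}. {}".formate(...)`, but Python strings have no `formate` method; the intended call is `format`. A minimal sketch of the fix (the `message`/`error_message` names are taken from the traceback; this is an illustration, not the vendor's patched code):

```python
message = "Error while making a REST call"
error_message = "KeyError: 'content-type'"

# Buggy line from the traceback (line 357 of infobloxddi_connector.py):
#     self.debug_print("{}. {}".formate(message, error_message))
# str has no attribute 'formate'; the correct method is str.format:
formatted = "{}. {}".format(message, error_message)
print(formatted)
```

Until the app is patched, correcting that one call in the installed app's `infobloxddi_connector.py` should stop the `AttributeError` from masking the underlying `KeyError` on the missing `Content-Type` header.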
Hi @kirk_in_porto,

Are you speaking of a lab or a production environment?

If it's a production environment, don't use less than the required resources, and give more if you can! You could get away with fewer resources for the management servers, but not for the SHs and IDXs.

If instead you're speaking of a lab, you can use whatever resources you have and it will run. I created a lab on my workstation (i7 with 16 vCPUs and 32 GB of RAM) with six Splunk servers: 2 IDXs, 3 SHs, and 1 management server (CM, SHC-D and DS), each with one vCPU and 4 GB of RAM. Obviously I have little data and very slow results, but it runs.

Ciao.
Giuseppe
| stats values(Sockets) as Sockets by IP Hostname ID | eval Sockets=mvjoin(Sockets, ",")
Dear community,

It might be an odd question, but I need to forward splunkd.log to a foreign syslog server, so I was following the sample from here: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Forwarding/Forwarddatatothird-partysystemsd

So far I have configured the forwarder to forward testing.log (it should be splunkd.log later) to the foreign syslog target:

#inputs.conf
[monitor:///opt/splunk/var/log/splunk/testing.log]
disabled=false
sourcetype=testing

#outputs.conf
[tcpout]
defaultGroup=idx-cluster
indexAndForward=false

[tcpout:idx-cluster]
server=splunk-idx-cluster-indexer-service:9997

[syslog:my_syslog_group]
server = my-syslog-server.foo:514

#transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

So far so good: testing.log appears on the syslog server. But not just that; all other messages are forwarded too.

Question: How can I configure the (heavy) forwarder to send only testing.log to the foreign syslog server, and how can I make sure that testing.log does not get indexed? In other words, testing.log should only be sent to syslog.

Many thanks in advance
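One thing to check (a hedged suggestion, since the post does not show a props.conf): a transforms.conf stanza such as [send_to_syslog] only takes effect once it is bound to something in props.conf; without that binding, routing can end up applying more broadly than intended. Binding it to the testing sourcetype from the inputs.conf above would scope the syslog routing to that one file, along these lines:

```
# props.conf on the heavy forwarder (assumed addition)
[testing]
TRANSFORMS-routing = send_to_syslog
```

Keeping testing.log out of the indexer-bound output is a separate step; the routing documentation linked in the post covers steering a sourcetype's _TCP_ROUTING away from the default tcpout group, which is worth verifying against your Splunk version.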
Hi,

My current data looks like:

IP        Hostname  ID   Sockets
1.1.1.1   Apple     100  404
1.1.1.1   Apple     100  22
2.2.2.2   Banana    99   404
3.3.3.3   Grapes    98   404

Only because the Sockets value in the second row is 22, it creates another row. What I want is: if the first three columns are the same, merge the Sockets field values, like:

IP        Hostname  ID   Sockets
1.1.1.1   Apple     100  404,22
2.2.2.2   Banana    99   404
3.3.3.3   Grapes    98   404
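For what it's worth, the merge being asked for behaves like the following Python sketch (illustrative only; note that in SPL, stats values(Sockets) dedupes and sorts the values, while list(Sockets) preserves order and duplicates):

```python
# Group rows on (IP, Hostname, ID) and join their Sockets values.
rows = [
    ("1.1.1.1", "Apple",  "100", "404"),
    ("1.1.1.1", "Apple",  "100", "22"),
    ("2.2.2.2", "Banana", "99",  "404"),
    ("3.3.3.3", "Grapes", "98",  "404"),
]

merged = {}
for ip, host, ident, socket in rows:
    merged.setdefault((ip, host, ident), []).append(socket)

for (ip, host, ident), sockets in merged.items():
    print(ip, host, ident, ",".join(sockets))
# 1.1.1.1 Apple 100 404,22
# 2.2.2.2 Banana 99 404
# 3.3.3.3 Grapes 98 404
```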
Please provide some anonymised sample events for both searches and what your expected output would look like
Hi @Dayalss,

Sorry, but it isn't clear. Could you share some samples of the normal condition (field1, field2 and field3 different) and of the condition where field1, field2 and field3 are the same?

Ciao.
Giuseppe
Please give an example of your expected output for when the fields are the same and for when they are not the same.
The output of the inner search query is numerical. To validate this output, the next step is to check the p90 latencies in Splunk Observability Cloud for these traces and compare the values. Thank you.
Splunk docs show all deployment components needing a minimum of x64, 12 cores, 12 GB RAM, 2 GHz.

My question concerns a dedicated license server for a VERY small distributed system for training and development. I want a search head and an indexer, and then a separate LM and DS. The data volume is small, less than 2 GB/day. Do I really need the full-blown minimums for an LM that will hold a single Dev license? I wanted to put this onto an RPi, but... yeah... that doesn't look like an option. I have a couple of low-end NUCs that are x64 but won't meet the minimums for cores or RAM.

I would welcome any assistance or even mentoring on this project.
Hi Giuseppe,

I did exactly what you said, but no luck! In another try, I even created a search and saved it as an alert named "rule-4444", then added a notable to it as an action. It appeared as "rule-4444" under "Top Notable Events" on the Security Posture page, but when I click on it, I am redirected to the Incident Review page and again all incidents are listed; the same thing ravida describes is happening. When you first click on it, you can see the notable name in the Incident Review page URL ("/incident_review?form.rule_name=rule-4444"), followed by the earliest/latest timestamps, but once the page finishes loading it disappears and is replaced with a new URL that only has the earliest/latest values.
Hi,

How can I combine a field's values when the other three field values are the same?

Example: if field1, field2 and field3 are the same but field4 is different, a new row is created in my Splunk table. I want to merge the field4 values into one field value, separated by commas, whenever field1, field2 and field3 are the same.
Look at cloning the default 'admin' role to a new role named anything you like, such as 'team2admin'. Then you can remove the permissions for things like:

- add/modify roles
- add/modify search index or inherited search index
- many others you would want to review and confirm

What you want to do is not impossible, but from a security point of view it is near impossible to audit and to ensure team 2 is always restricted from accessing the indexes in question. Additionally, going forward, any permissive capabilities added to 'admin' wouldn't carry over to the cloned role, so for every upgrade I would recommend an audit by proper admins.
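On disk, the cloned role ends up as an authorize.conf stanza; a sketch of the idea (the role and index names here are made up, and the exact capability list should be checked against the authorize.conf spec for your version):

```
# authorize.conf -- illustrative only
[role_team2admin]
# limit searchable indexes to team2's own
srchIndexesAllowed = team2_*
srchIndexesDefault = team2_main
# capabilities are granted one per line; deliberately omit
# edit_roles and edit_user so team2 admins cannot widen
# their own access
schedule_search = enabled
```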
Hi Team,

I am facing the below error while testing in my local Splunk Web v9 while connecting to a Chronicle instance:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

I have created a Python app to upload to Splunk, and created a request_fn where the below line of code is executed:

requests.get(host + url, verify=False, **kwargs)

I made sure that SSL verification is disabled in the Python code (verify=False above), and I have also disabled it in the Splunk settings: Server Settings > General > "Enable SSL (HTTPS) in Splunk Web?" set to No.

I have also checked web.conf, where SSL is set to 0 (no):

[settings]
enableSplunkWebSSL = 0

But still, when my local Splunk Web tries to make the HTTP request, it gives the SSL error:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)

Does anybody have any clue, or has anyone faced the same issue?
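A point worth noting: enableSplunkWebSSL only controls whether Splunk Web itself serves HTTPS; it has no effect on outbound requests made by app code, so verification has to be handled in the outbound call itself. Under the hood, requests' verify= parameter maps onto an ssl.SSLContext roughly like this stdlib sketch (the function name is mine, for illustration):

```python
import ssl

def make_context(verify=True, ca_file=None):
    """Build an SSLContext the way requests treats its verify= parameter."""
    ctx = ssl.create_default_context(cafile=ca_file)
    if not verify:
        # verify=False equivalent: accept any certificate (insecure;
        # prefer passing the CA bundle that signed the Chronicle cert
        # via ca_file instead)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

If the error persists even with verify=False, it usually means a different code path (e.g. an SDK helper or another library) is making the failing request, so it is worth confirming which call actually raises.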
I had this issue with a different add-on. When packaging an app/add-on using tar, use the following:

COPYFILE_DISABLE=1 tar --format ustar -cvzf <appname>.tar.gz <appname_directory>

You'll find this in the documentation here.
Hi @KhalidAlharthi,

This issue appears when a peer has been disconnected from the Cluster Master for a time (in my project it happened during a Disaster Recovery test). Sometimes one server has this issue, but usually, if you give it more time, it rebalances the data and the issue disappears; otherwise, you can force the situation with a rolling restart.

Ciao.
Giuseppe
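For reference, the rolling restart mentioned above is issued from the Cluster Master/Manager CLI (command form may vary by version, so check the managing-indexer-clusters docs):

```
splunk rolling-restart cluster-peers
```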
I'm seeing this same behavior since upgrading the Splunk HF to version 9.2.2. There is a server that has been retired; usually I would delete the record, and if that system came back online for any reason it would show back up. Is there another way to remove it, or will it drop off over time?

Kevin
Hi All,

Hope you are all doing well. I am very new to Splunk Enterprise Security, and I need your help to understand how I can create a reverse integration with ServiceNow.

We are using the ServiceNow Security Operations integration to manually create incidents in ServiceNow for notables. We have a new request from the SOC to update the notables when the incidents are created and closed in ServiceNow. We are using Splunk Enterprise and want to know what endpoints we need to provide so that we can achieve reverse communication. I have created a user in Splunk who has access to edit notables, but I am not sure what endpoint I need to provide: is it just the URL of my instance, or do I need to add any services as well?

Please let me know if you have any other questions. Thanks in advance.
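On the endpoint question: Splunk Enterprise Security exposes a REST endpoint for editing notable events on the management port (8089 by default), commonly used for exactly this kind of bi-directional integration. Subject to verification against your ES version, the shape is roughly:

```
# POST, authenticated as the user you created
https://<your-search-head>:8089/services/notable_update
# typical form fields: ruleUIDs (notable event IDs), status, comment
```

ServiceNow would call this URL on your search head rather than the Splunk Web URL, so the management port needs to be reachable from the ServiceNow instance.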