All Posts

Hi @ohbuckeyeio, I'm just curious: did you ever get a response from Splunk on this? I'm finding other problems with my use of splunk-utils and wonder whether more recent versions are safe to use or not.
You could set up some scheduled reports to run on partial sets of addresses, then load the results from those searches in your dashboard. This assumes you can work with out-of-date data, e.g. your report is based on yesterday's data and you don't need the very latest. Alternatively, as you said, you could "chain" your searches: when a search completes, set a token which the next search is waiting for, and so on. (This is easier to do in SimpleXML, but still possible in Studio.)
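For anyone looking for that token-chaining pattern, here is a minimal SimpleXML sketch; the index, lookup file, field, and token names are illustrative assumptions, not details from the original thread:

```xml
<dashboard>
  <row>
    <panel>
      <table>
        <search>
          <query>index=network [ | inputlookup ip_batch_1.csv | fields src_ip ] | stats count by src_ip</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
          <!-- When this search finishes, set the token the next panel depends on -->
          <done>
            <set token="batch1_done">true</set>
          </done>
        </search>
      </table>
    </panel>
    <!-- Hidden until $batch1_done$ is set, so its search is not dispatched until then -->
    <panel depends="$batch1_done$">
      <table>
        <search>
          <query>index=network [ | inputlookup ip_batch_2.csv | fields src_ip ] | stats count by src_ip</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

Repeating the done/set pattern on each panel chains the batch searches so only one runs at a time instead of all dispatching at dashboard load.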
If you are decommissioning indexers, they have to redistribute all the data on them to other peers in the cluster. If you try to take several down all at once, that process will likely break, so definitely decommission them individually. Also, if you're using ./splunk offline --enforce-counts, DO NOT set maintenance mode. The cluster cannot be in maintenance mode, because bucket fixup does not occur during maintenance mode. https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Takeapeeroffline#The_enforce-counts_offline_process
Yes, trial instances do use self-signed certs. That's normal. If you buy a normal paid service, you get certs from a well-known CA. And since you say "was", does that mean your trial has ended? That would mean the environment has been turned off, so there is nothing to connect to, and the connection timeout is a normal situation.
I would like to create a dashboard which would run a search daily to check network traffic against a list of about 18,000 IP addresses. We created a lookup table with all the IP addresses and ran it, but the search times out. Then we tried to split the lookup table into 8 different tables, with each table driving a panel in our dashboard. A few panels will run when we do it this way, but the rest time out. An idea we had was to either create a dropdown to only run the searches when we specify, or create a search that runs one lookup table and only starts the next search when the first one finishes. Is there a simpler way to do this? Ideally it would all be one search, but that just seems to be too much for our resources.
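For reference, a lookup-based watchlist match is often written along these lines; the index, sourcetype, lookup file, and field names below are assumptions for illustration, not details from the post:

```spl
index=network sourcetype=traffic
| lookup ip_watchlist.csv ip AS src_ip OUTPUT ip AS matched_ip
| where isnotnull(matched_ip)
| stats count by src_ip
```

With 18,000 rows the lookup file itself is small; it is usually the volume of raw events scanned that causes the timeout, so narrowing the base search (time range, index, sourcetype) tends to help more than splitting the lookup.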
Take what was given previously and adjust with the additional fields you need carried through.

Original suggestion:

| stats values(Sockets) as Sockets by IP Hostname ID
| eval Sockets=mvjoin(Sockets, ",")

Extended suggestion:

| stats values(x) as x, values(y) as y, values(Sockets) as Sockets by IP Hostname ID
| eval Sockets=mvjoin(Sockets, ",")
| table IP Hostname ID Sockets x y

Extend with as many fields as you want to carry forward. The table command is only required if you wish to control the display order of the fields; skip it otherwise.
I am an Admin
What is your role? The indexes you can see are based on your role. Can you share the exact error message(s) you are seeing, along with the query that caused them? That will help us find the source of the problem.
I am trying to create use cases and am searching the indexes, but I get an "index not found" search error message. None of my logs are showing up anywhere.
Please provide a more complete representation of your data and your expected output - we can only work with what you show us.
Hi guys, Is there any documentation available out there to set up the Cisco Security Cloud app? Specific requirements, "failed to create an input" and similar errors, etc. Qzy
Hello @pratrox, I believe this has been addressed in the latest version of the Upgrade Readiness App. Just have the app installed in your environment and run the Python scan from the UI, and it should display incompatible apps/add-ons. Splunkbase link - https://splunkbase.splunk.com/app/5483   Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated!!
Hi, I have already tried this, but the issue is that there are around 15+ fields which I'm using in my complete table query at the end. I just want to merge based only on these 3 fields, but if I mention only these fields in stats, all the other 12+ fields get empty values. Is there a way it can check only those 3 fields without impacting the other field values?
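One common way to keep the remaining fields while grouping on just three is a wildcard aggregation; a sketch, assuming the grouping fields are IP, Hostname, and ID as in the earlier replies:

```spl
| stats values(*) as * by IP Hostname ID
| eval Sockets=mvjoin(Sockets, ",")
```

values(*) as * keeps every other field as the set of values seen in each group, so the extra 12+ fields are no longer dropped; note that fields with multiple distinct values per group come back as multivalue fields.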
Hello @KhalidAlharthi , This could be indicative of an underlying hardware problem as well. You can check for that if the issue still persists after a rolling restart. Apart from connectivity issues, what other errors do you observe?   Thanks, Tejas.
In addition to what @dural_yyz said, you can restrict team1 from accessing the dedicated search head by not creating users for team1 on the new instance. And make sure to provide appropriate capabilities to team2 so that they're not able to modify or search the indexes belonging to team1.   Thanks, Tejas
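As a sketch of the capability side, index restrictions live in authorize.conf on the dedicated search head; the role and index names below are examples, not from the post:

```ini
# authorize.conf on the dedicated search head (role/index names are examples)
[role_team2]
srchIndexesAllowed = team2_*
srchIndexesDefault = team2_main
```

Leaving team1's indexes out of srchIndexesAllowed means team2's role cannot search them even if the data is reachable from that search head.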
Hey @kirk_in_porto , These are the minimum required hardware specifications. It is not that Splunk won't work with fewer cores or less RAM, but it will affect performance. You might not get efficient output from fewer cores if you intend to run many searches or processes through the license manager instance. It will still try to function as best it can. However, just for training purposes, it is okay to have fewer cores or less RAM allocated to an instance.   Thanks, Tejas.
App 'Infoblox DDI' started successfully (id: 1725978494606) on asset: 'infoblox-enterprise'(id: 25)
Loaded action execution configuration
Logging into device
Configured URL: https://10.247.53.30
Querying endpoint '/?_schema' to validate credentials
Connectivity test succeeded
Exception Occurred. 'str' object has no attribute 'formate'.
Traceback (most recent call last):
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 349, in _make_rest_call
    content_type = request_obj.headers[consts.INFOBLOX_JSON_CONTENT_TYPE]
  File "/opt/phantom/data/usr/python39/lib/python3.9/site-packages/requests/structures.py", line 52, in __getitem__
    return self._store[key.lower()][1]
KeyError: 'content-type'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "lib3/phantom/base_connector.py/base_connector.py", line 3204, in _handle_action
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 1173, in finalize
    return self._logout()
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 444, in _logout
    status, response = self._make_rest_call(consts.INFOBLOX_LOGOUT, action_result)
  File "/opt/phantom/data/apps/infobloxddi_5ec38a6e-18c3-4cc3-ab47-2754b56aea50/infobloxddi_connector.py", line 357, in _make_rest_call
    self.debug_print("{}. {}".formate(message, error_message))
AttributeError: 'str' object has no attribute 'formate'
Connectivity test succeeded
Hi @kirk_in_porto , are you speaking of a lab or a production environment? If it's for a production environment, don't use less than the required resources, and give more of them if you can! You can eventually get away with fewer resources for the management servers, but not for SHs and IDXs. If instead you're speaking of a lab, you can use whatever resources you have and it will run. I created a lab on my workstation (i7 with 16 vCPUs and 32 GB of RAM) with six Splunk servers: 2 IDXs, 3 SHs, 1 management server (CM, SHC-D and DS), each one with one vCPU and 4 GB of RAM. Obviously I have little data and very slow results, but it runs. Ciao. Giuseppe
| stats values(Sockets) as Sockets by IP Hostname ID | eval Sockets=mvjoin(Sockets, ",")