All Posts


Hi @gcusello, I believe using roles (creating a new one to run the saved search) might work.
Hi @timtekk , it's very strange because in this documentation https://docs.splunk.com/Documentation/Splunk/9.3.1/Updating/Useforwardermanagementtomanageclients#View_client_status (latest version), this feature is still present. But many people in the Community have reported the same issue: https://community.splunk.com/t5/Deployment-Architecture/Unable-to-remove-records-from-the-Deployment-Server/m-p/698055 Open a case with Splunk Support, because the behavior differs from the documentation. Ciao. Giuseppe
Hi @waJesu , define your requirement exactly and match it to your fields; then it's easy to choose the right commands. Ciao. Giuseppe
Hi @jaburke1 , I don't like automatic lookups! And I use them only when I must! Ciao. Giuseppe
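For illustration, a minimal sketch of the explicit alternative: calling the lookup only in the searches that need it, rather than letting an automatic lookup fire everywhere (the lookup name, base search, and output field here are hypothetical; FieldA is taken from the question below):

index=web sourcetype=access
| lookup my_large_lookup FieldA OUTPUT enriched_field
| stats count by enriched_field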
Our Splunk add-on was created with Python modules (like cffi, cryptography, and PyJWT) placed under the app's /bin/lib folder. The add-on works as expected. When we try to upgrade Splunk Enterprise from 8.2.3 to 9.3, our add-on fails to load the Python modules and throws the error 'No module named '_cffi_backend''. Note: we are running on Python 3.7 and have updated the Splunk Python SDK to the latest 2.0.2.
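For reference, a minimal sketch of the usual import pattern for libraries bundled under bin/lib (assuming the script lives in the app's bin/ folder). Note that _cffi_backend is a compiled C extension, so a copy built for one Python version or platform may fail to import under the interpreter bundled with a newer Splunk release; rebuilding the bundled packages against the new Splunk's Python is a common fix.

import os
import sys

# Prepend the app's bundled libraries so they take precedence on sys.path.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

import cffi  # raises "No module named '_cffi_backend'" when the compiled
             # backend under lib/ does not match the running interpreter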
How do you get a saved search to ignore a specific automatic lookup? The reason for wanting to do this is that the lookup being used is very large and the enrichment is not needed for this particular search. Using something like

| fields - FieldA FieldB

did not speed up the search (where FieldA and FieldB are the fields matched on by the automatic lookup). When the automatic lookup has its permissions changed to just one app, the saved search runs very fast, but I do not believe keeping it like that is an option. Ideally there would be a setting just for this one saved search so that it would not know the automatic lookup exists. Thanks in advance for any suggestions.
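For context, the app-scoping described above is controlled by the knowledge object's sharing metadata. A sketch of what that might look like in the owning app's metadata/local.meta, assuming the automatic lookup is defined on a sourcetype (the sourcetype and lookup names are hypothetical):

[props/my_sourcetype/LOOKUP-my_large_lookup]
export = none

With export = none the automatic lookup only fires for searches run in that app's context, which matches the speed-up observed when the permissions were restricted to one app.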
Hi @ITWhisperer, can we have a line chart with X axis = _time, Y axis = column1, and the values of count1, count2, count3 as 3 lines on the chart?
Hi, I am just facing the same problem. Did you finally figure out any solution? I am dealing with this issue directly with Tufin and hope to have an answer soon. I'll come back if I have any update.
Essentially, a line chart will be visualised from a table with the first column being the x-axis, normally a timestamp (_time), with the subsequent columns providing the values for the lines on the chart. Your table does not match these criteria, so you would not be able to represent your table as a line chart (without removing or combining some of your data).
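For illustration, a sketch of a table shape that does render as three lines, using the field names from the question (how to fold column1 and column2 into this depends on the data):

... base search ...
| table Time Count1 Count2 Count3

With the time column first and one column per series, the line chart draws Count1, Count2 and Count3 as three lines.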
Hi, I figured out how to get the week number based on the day number: Get_Week_Number=floor(tonumber(strftime(ToDateTime1, "%d"))/7)+1. I also adjusted my preferences for the datetime to show Eastern.
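For reference, a quick check of that week-of-month formula, using the timestamp string from this thread:

| makeresults
| eval ToDateTime1=strptime("Thu 10 Oct 2024 08:48:12:574 EDT", "%a %d %b %Y %H:%M:%S:%3N %Z")
| eval Get_Week_Number=floor(tonumber(strftime(ToDateTime1, "%d"))/7)+1

Day 10 gives floor(10/7)+1 = 2, i.e. the second week of October.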
Hi sainag, Thank you so much for your quick response. I was able to use your example and got the following. Two things I noticed: 1) the week number is 40, but this should have been the October month week number; 2) the time part: I have 08:48:12 which is EST, but in my results I see it as 07:48:12.

ToDateTime1=strptime(TempDate1, "%a %d %b %Y %H:%M:%S:%3N %Z"),
Get_Day_Name=strftime(ToDateTime1, "%A"),
Get_Month_Num=strftime(ToDateTime1, "%d"),
Get_Month_Name=strftime(ToDateTime1, "%b"),
Get_Year=strftime(ToDateTime1, "%Y"),
Get_Week_Number=strftime(ToDateTime1, "%U"),
Get_Time_Part=strftime(ToDateTime1, "%H:%M:%S")

Thanks a lot
Hi, can you please help me to create a multi-line chart with the below data? Data in the below format is fetched in Splunk, and I need to create a multi-line chart from it:

On the X axis: Time
Y axis: column1
Count1, Count2 and Count3 should be the 3 lines in the multi-line chart.

The last command in the Splunk query that fetches the data in table form is:

| table column1 column2 Time Count1 Count2 Count3

With this data, can we create a multi-line chart in Splunk?
I started noticing this error recently too, and found the following (old) Community post that pointed me in the direction of the splunkd web timeout: https://community.splunk.com/t5/All-Apps-and-Add-ons/Error-while-installing-an-app-on-Splunk-6-on-Windows/m-p/138027/highlight/true Sure enough, I had the default 30 seconds in place, and after increasing that (and restarting Splunk) I haven't observed the message.

@TiagoTLD3 wrote:
Hello! Since 7.3.0 I'm seeing the reload process for assets and identities failing frequently. Any ideas?

ERROR pid=20559 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Traceback (most recent call last):
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 601, in simpleRequest
    serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1710, in request
    conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1425, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1377, in _conn_request
    response = conn.getresponse()
  File "/app/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/app/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/app/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/app/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/app/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
    return self.read(nbytes, buffer)
  File "/app/splunk/lib/python3.7/ssl.py", line 937, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 483, in reload_settings
    raiseAllErrors=True
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 613, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/identity_correlation/identity_manager/_reload: The read operation timed out',)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 380, in do_run
    self.run(stanzas)
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 586, in run
    reload_success = self.reload_settings()
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 486, in reload_settings
    logger.error('status="Failed to reload settings" error="%s"', e.get_message_text())
AttributeError: 'SplunkdConnectionException' object has no attribute 'get_message_text'
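For reference, the timeout mentioned above is set in web.conf on the search head; a sketch of the change, assuming the setting in question is splunkdConnectionTimeout (300 is an example value; the default is 30):

[settings]
splunkdConnectionTimeout = 300

As noted above, a Splunk restart is needed for the change to take effect.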
Thank you for your prompt response and help. The logs are coming from other sources, e.g. firewalls. Maybe I should have used the hostname/computer name that is reaching out to those URLs.
I am referring to the Deployment Server list. When I go to [Settings > Forwarder Management > Clients] and click DELETE RECORD on a client, it says this option has been deprecated. I can still click delete, but the client never goes away. See my attached screenshot.
I would like to update my universal forwarders to send data to 2 separate endpoints for 2 separate Splunk environments. How can I do this using my Deployment Server? I already have an app that I will use for the UF update.
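For illustration, a minimal sketch of an outputs.conf that could go in that deployment app to clone data to both environments (group names and server addresses are hypothetical):

[tcpout]
defaultGroup = env1_indexers, env2_indexers

[tcpout:env1_indexers]
server = idx1.env1.example.com:9997

[tcpout:env2_indexers]
server = idx1.env2.example.com:9997

Listing both groups in defaultGroup makes the forwarder send a copy of its data to each environment.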
Hi @yuanliu , thank you for your suggestion. The subsearch has a max 50k limit, not 5k. If one or more subsearches hit the 50k limit, I'd want to get an email notification indicating which subsearch exceeded it. In the example below, an email alert would be sent indicating that 2 subsearches exceed the 50k limit: search3 = 60k rows and search4 = 70k rows. I can create a scheduled report that sends an email every day, but I am not sure if the report has the ability to send emails only when it meets a certain condition.

search1
| join max=0 type=left ip [search ip="10.1.0.0/16" | eval this = "search 2"]
| join max=0 type=left ip [search ip="10.2.0.0/16" | eval this = "search 3"]
| join max=0 type=left ip [search ip="10.3.0.0/16" | eval this = "search 4"]
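For what it's worth, one way to get the conditional email: schedule a companion search that counts rows per branch and save it as an alert with the trigger condition "number of results > 0". A sketch using the CIDRs from above; multisearch runs its branches as full streaming searches, so the counts are not capped the way join/append subsearches are:

| multisearch
    [search ip="10.1.0.0/16" | eval this="search 2"]
    [search ip="10.2.0.0/16" | eval this="search 3"]
    [search ip="10.3.0.0/16" | eval this="search 4"]
| stats count by this
| where count > 50000

The alert then fires (and emails) only when at least one branch is over the 50k limit, and the results list which ones.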
Based on what I can understand, you can try using something like this and tweak it as needed.

| makeresults
| eval datetime_str="Thu 10 Oct 2024 08:48:12:574 EDT"
| eval datetime=strptime(datetime_str, "%a %d %b %Y %H:%M:%S:%3N %Z")
| eval day_name=strftime(datetime, "%A"), day_of_month=strftime(datetime, "%d"), month=strftime(datetime, "%b"), year=strftime(datetime, "%Y"), week_number=strftime(datetime, "%U"), time_part=strftime(datetime, "%H:%M:%S")
| fields datetime_str, datetime, day_name, day_of_month, month, year, week_number, time_part
| eval hour=substr(time_part, 1, 2), minute=substr(time_part, 4, 2), second=substr(time_part, 7, 2)
The reason for your error is "poorly formatted data". Regarding INDEXED_EXTRACTIONS=JSON, here is a good article on when/where it can be used. Can you please run this search and show me the output for your sourcetype?

index=_internal source=*splunkd.log* AggregatorMiningProcessor OR LineBreakingProcessor OR DateParserVerbose WARN data_sourcetype="my_json"
| rex "(?<type>(Failed to parse timestamp|suspiciously far away|outside of the acceptable time window|too far away from the previous|Accepted time format has changed|Breaking event because limit of \d+|Truncating line because limit of \d+))"
| eval type=if(isnull(type),"unknown",type)
| rex "source::(?<eventsource>[^\|]*)\|host::(?<eventhost>[^\|]*)\|(?<eventsourcetype>[^\|]*)\|(?<eventport>[^\s]*)"
| eval eventsourcetype=if(isnull(eventsourcetype),data_sourcetype,eventsourcetype)
| stats count dc(eventhost) values(eventsource) dc(eventsource) values(type) values(index) by component eventsourcetype
| sort -count
Hi, before asking I did try to search but was not able to locate a thread with this kind of datetime value, so I had to start this new thread. I have datetime values in string format like

Thu 10 Oct 2024 08:48:12:574 EDT

Sometimes there may be a null in it; that's how it is. What I need to do is derive separate columns from this:

day name, like Thursday
day of month, like 10
month, like Oct
year, like 2024
week number, like 2 or 3
the time part in a separate column, like 08:48:12:57 (not worried about EDT)
the time components separated again: 08 as hour, 48 as minute, 12 as second (not worried about ms)

Still looking for threads of this kind, but again, sorry, this is a basic one that just needs more searching.