All Topics



Hi Splunkers, I have some doubts about forwarder buffers, both universal and heavy. The starting point is this: I know that if an indexer receiving data from a UF goes down, the UF has a buffering mechanism to store the data and send it to the proper destination once the indexer is up and running again. If I'm not wrong, the limits of this buffer can be set in a config file (I don't remember which one). Now, the questions are: 1. Even if the answer may be obvious, is this mechanism also available on a HF? 2. How can I decide the maximum size of my buffer? Is there a preset limit, or does it depend on my environment?
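For reference, the buffering described above is controlled in outputs.conf (the in-memory output queue) and, optionally, inputs.conf (a disk-backed persistent queue), and both apply to UFs and HFs alike. A minimal sketch, with hypothetical indexer hostnames, port numbers, and sizes:

    # outputs.conf on the forwarder -- in-memory queue that absorbs an indexer outage
    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997
    maxQueueSize = 512MB

    # inputs.conf -- optional disk-backed queue (available for network/scripted inputs)
    [tcp://5514]
    persistentQueueSize = 1GB

As far as I know there is no hard preset ceiling; the practical limit is the memory and disk you can dedicate on the forwarder, so sizing depends on your environment's data rate and how long an outage you want to survive.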
Hello Splunkers, Everything is in the title. I've read the limits.conf documentation: [thruput] maxKBps = <integer> I know that the UF has a default value of 256 KBps, but does a Heavy Forwarder also have this kind of limitation? Regards, VERDIN-POL Gaétan
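If memory serves, a Heavy Forwarder (being a full Splunk Enterprise instance) ships with maxKBps = 0, i.e. unthrottled, so only the UF is limited by default. Either way, the setting can be overridden on any forwarder; a minimal sketch:

    # $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local/limits.conf)
    [thruput]
    # 0 removes the throughput ceiling; any other value is KB per second
    maxKBps = 0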
Hi folks, I need your support to build a search query to track migration activity. We have a requirement to track hosts that are migrated from Windows OS to Linux OS, and the search should visualize the movement of the migration activity. I have two lookup files: one has the Windows OS host details, the other the Linux OS hosts. So I need to compare how many machines migrated from Windows to Linux over time (last 7 days). | inputlookup windows.csv | fillnull value="windows" OS | inputlookup linux.csv append=1 | fillnull value="linux" OS | stats dc(OS) as count values(lastSeen) as LastSeen, values(FirstSeen) as Firstseen by hostname | where count > 1 | mvexpand OS The above query doesn't show the expected result. I would really appreciate it if someone has any ideas or suggestions on this.
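One issue in the query above: mvexpand OS runs after a stats that never produced an OS field. A hedged sketch of an alternative, assuming hostname is the common key and lastSeen/FirstSeen come from the lookups:

    | inputlookup windows.csv
    | eval OS="windows"
    | inputlookup append=true linux.csv
    | eval OS=coalesce(OS, "linux")
    | stats values(OS) as OS dc(OS) as os_count values(lastSeen) as LastSeen values(FirstSeen) as FirstSeen by hostname
    | where os_count > 1

Hosts surviving the final where appear in both lookups, i.e. the candidates that migrated.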
How do I explicitly say an App is configured with Python?
* I have a custom setup page (dashboard) in my App.
* I'm using a Python REST endpoint to configure the App and then executing the below REST endpoints (code) to tell Splunk that the App has been successfully configured.
* This concept works fine on regular SHs.
* We have faced an issue on a Search Head Cluster (Splunk version 9.0.1).
* Our app.conf with the parameter is_configured = 1 is replicated across all the SHs.
* The problem is that even though the conf file is being replicated perfectly fine, the other SHs still redirect to the setup page.
* Below are some other endpoints that I tried as alternatives, but had no luck:
/servicesNS/nobody/<app>/properties/app/install (pass is_configured as parameter)
/services/apps/local/<app>?output_mode=json
* BTW, all of the endpoints I tried gave success responses, but the issue persisted.
Does anyone know the right way to do it? Is this a bug in 9.0.x?
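For what it's worth, the apps/local endpoint accepts a configured POST parameter, which may behave better than writing app.conf directly; a hedged sketch (credentials and host are placeholders):

    # mark the app configured through the management port of a SHC member
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/<app>/apps/local/<app> \
        -d configured=1

This is a sketch, not a confirmed fix for the SHC redirect: each member may still cache the app's state until it reloads.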
Hi, Every time I apply the shcluster bundle, the deployer pushes all apps in /opt/splunk/etc/shcluster/apps to the SHC members, even if there hasn't been any modification to the app. I can see that the checksum is the same on the deployer and the SHC members, but somehow the deployer still pushes the app. We are using push mode merge_to_default and are on Splunk version 9.0.1. Every apply shcluster bundle takes several hours. Any ideas?
Hey guys. When I export a dashboard with tables from Dashboard Studio, the export includes scrollbars. Is there a way to remove them, or to code the tables so that they expand with their content and break lines across pages?
Hey everyone. I had a dashboard in the old Dashboards framework and I didn't like the way it exported, so I cloned it into Dashboard Studio. While the export looks nice now, it lost the functionality to color table cells based on value. Does anyone know how to make this work through the UI or through JSON?
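In Dashboard Studio this is done in the dashboard's JSON source, as far as I know. A minimal sketch adapted from the Studio column-formatting pattern, assuming a table column named "count" (the column name, thresholds, and colors are placeholders to adjust):

    "options": {
        "columnFormat": {
            "count": {
                "data": "> table | seriesByName(\"count\") | formatByType(countFormatConfig)"
            }
        },
        "context": {
            "countFormatConfig": {
                "number": { "color": "> value | rangeValue(countRangeConfig)" }
            },
            "countRangeConfig": [
                { "to": 20, "value": "#D41F1F" },
                { "from": 20, "value": "#118832" }
            ]
        }
    }

This block belongs inside the table visualization's definition in the dashboard JSON.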
I am getting this error when trying to run this command sudo useradd -m splunk. Can anyone help?    
Hi Team, I want a Splunk search query for alert creation. My requirement: the service response time is > 3 seconds, and only if that condition holds continuously for more than 10 minutes should an alert be raised. In the search query I used where for the response-time condition, but I can't work out how to express the time condition. Below is my search query; please help me add the time condition to it. index=kpidata | eval ProcessingTime=ProcessingTimeMS/1000 | where ProcessingTime > 3
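One hedged approach: schedule the alert every few minutes over a 10-minute window, bucket by minute, and fire only when every minute breached the threshold (this assumes events arrive at least once a minute):

    index=kpidata earliest=-10m
    | eval ProcessingTime=ProcessingTimeMS/1000
    | bin _time span=1m
    | stats min(ProcessingTime) as fastest by _time
    | where fastest > 3
    | stats count as breached_minutes
    | where breached_minutes >= 10

Using min per minute makes the check strict: a single fast response within a minute resets the condition. Use max or avg instead if "continuous" should be read more loosely.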
Could anyone please tell me how to connect to an external Postgres DB so I can see data in this dashboard? I cannot find instructions anywhere. Thanks, pawelF
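The usual route is the Splunk DB Connect app: install it, add a PostgreSQL JDBC connection plus an identity, and then query the database from SPL. A minimal sketch, assuming a connection named my_postgres and a hypothetical table:

    | dbxquery connection="my_postgres" query="SELECT id, created_at FROM orders LIMIT 100"

The dashboard can then use that dbxquery search like any other base search.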
Dear All, I am running on Splunk Cloud 9.0.2208.3 as an sc_admin-role user, and I have created a load of calculated fields under my user so that the administrators can see data in the associated Data Models. I have made the calculated fields non-private, i.e. shared at the app/global level as appropriate. All is good and others can see the data in the DMs. However, I am leaving the company and they are going to have to delete my user as part of the offboarding process (plus I don't want to be blamed if someone hacks the system and it was my user, somehow). As a result, I want to reassign my KBOs to the nobody user, so that my user is no longer in the circuit and can be safely deleted. However, when I go into "Reassign Knowledge Objects", I cannot see the calculated fields at all. I have made sure that I can see KBOs in all apps and for all users, and still no dice. Why can't I reassign these KBOs? How can I do this? Or perhaps, do I even need to do this?
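If the UI won't list them, ownership can usually be changed through the REST ACL endpoint instead; a hedged sketch (user, app, host, and field names are all placeholders):

    # repoint one calculated field at the 'nobody' user, keeping app-level sharing
    curl -k -u sc_admin_user:password \
        https://<stack>.splunkcloud.com:8089/servicesNS/<your_user>/<app>/data/props/calcfields/<calc_field_name>/acl \
        -d owner=nobody -d sharing=app

Note that on Splunk Cloud the management port may not be reachable by default, so this might require Splunk Support or ACS access first.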
I developed an add-on with the Add-on Builder that uses Python code to send events to my Splunk. I first tested the add-on locally (it worked perfectly) and then tried it on a different Splunk Enterprise instance to see if it works there. After clicking the "Install app from file" option and installing it from the SPL file, the icon of the app showed up with the other apps. But when I clicked the app, what would normally take me to the inputs page (my app requires an input) instead prompted an error that said "Failed to load Inputs Page. This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page", and when I clicked on details it said "Error: Request failed with status code 500". To solve this, I tried a few things (none of them worked): I installed the Add-on Builder on the other Splunk Enterprise instance, which gave me a blank screen every time I clicked on it; I tried opening a Splunk Cloud instance of my own to test it there, but didn't see the "install app from file" option; and I looked for errors in the browser console, where I only saw "Uncaught (in promise) Error: Request failed with status code 500" a few times. Does anyone know why this might happen, or ways to fix it?
I have this search which builds a table: my_search | timechart span=1d sum(eval(b/1024/1024/1024)) AS volume_b It will build a table like this:
24 October    18
25 October    10
26 October    25
27 October    30
Now, from this search I want to do a simple count: how many days have a volume > 15? For the table above it would just show count: 3.
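A minimal sketch, appending two lines to the existing search:

    my_search
    | timechart span=1d sum(eval(b/1024/1024/1024)) AS volume_b
    | where volume_b > 15
    | stats count

The where keeps only the days above the threshold, and stats count returns the single number (3 for the sample table).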
Hi all, I am a fresher in Splunk. Recently I ran into a problem and would like to ask whether anyone has ideas. I composed my SPL code to generate this kind of table:
city     produce name    count
city1    product 1       purchase count 1-1
city1    product 2       purchase count 1-2
city2    product 1       purchase count 2-1
city2    product 2       purchase count 2-2
city3    product 1       purchase count 3-1
city3    product 2       purchase count 3-2
But I would like to transform the table into this kind of table:
produce name    city1                 city2                 city3
product 1       purchase count 1-1    purchase count 2-1    purchase count 3-1
product 2       purchase count 1-2    purchase count 2-2    purchase count 3-2
Do any experts have ideas on how to implement SPL code to fulfill that? I tried to use transpose column_name=city, but in vain; the output doesn't look as I expect. Thank you so much!
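This pivot is what xyseries does (row field, column field, value field); a minimal sketch appended to the existing search, assuming the field names shown above:

    ... | xyseries "produce name" city count

transpose swaps all rows and columns wholesale, which is why it didn't give the expected layout; xyseries (or equivalently chart values(count) over "produce name" by city) pivots just one field into columns.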
Hello, Simon Richardson developed the add-on TA_Zimbra. I'm new to Zimbra and I would like to improve the add-on with new extractions and capabilities. I would like to work in the right way, producing shareable results. As far as I can see, this add-on is still under development. Is there a way to contact Simon Richardson? Thank you very much. Kind Regards, Marco
This is my first question here! And I just started my journey with Splunk. I have two files, test1.csv and test2.csv, with the same column names in both files: hashValue, updatedTime, alertName. How do I compare both files with respect to their column values and output only the differences? Thanks
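A minimal sketch, assuming both files are uploaded as lookups: tag each row with its source file, then keep rows that appear in only one of the two:

    | inputlookup test1.csv
    | eval file="test1"
    | inputlookup append=true test2.csv
    | eval file=coalesce(file, "test2")
    | stats values(file) as found_in dc(file) as file_count by hashValue updatedTime alertName
    | where file_count=1

found_in then tells you which file each unmatched row came from.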
Exceptions    Day1    Day2    Day3
Abc           5       4       3
Start         3       4       4
xyz           3       2       5
Hi team, I have a multiselect input filter, and I need to set the multiselect input's value to the drilldown value of a pie chart. How can I change the input filter value to the drilldown token value? Please guide me on this.
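If this is a Simple XML dashboard, setting the form.-prefixed token from the pie chart's drilldown updates the input itself; a minimal sketch, assuming the multiselect's token is named my_filter:

    <drilldown>
      <!-- the form. prefix pushes the clicked value back into the input -->
      <set token="form.my_filter">$click.value$</set>
    </drilldown>

Without the form. prefix the value only lands in the token namespace and the visible input stays unchanged.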
Hello, I am trying to fix an error for an inherited add-on that I am maintaining:
category: app_cert_validation
description: Detect usage of JavaScript libraries with known vulnerabilities.
ext_data: { [+]
message_id: 7002
rule_name: Validate app certification
severity: Fatal
solution:
3rd party CORS request may execute
parseHTML() executes scripts in event handlers
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution
Regex in its jQuery.htmlPrefilter sometimes may introduce XSS
Regex in its jQuery.htmlPrefilter sometimes may introduce XSS
ReDoS - regular expression denial of service
Regular Expression Denial of Service (ReDoS)
Regular Expression Denial of Service (ReDoS)
This vulnerability impacts npm (server) users of moment.js, especially if a user-provided locale string, e.g. fr, is directly used to switch the moment locale.
status: Fail
sub_category: Checks related to JavaScript usage
I checked some community questions; one of the answers mentions the fix below:
Export the app from any Add-on Builder
Import the app into Add-on Builder v4.1.0 or newer
Download the app packaged from Add-on Builder v4.1.0 or newer
I don't have the original app export and cannot import it into the new AoB. I tried importing a tgz file, but that gives an error. Is there any other way I can fix this, or something else I can try?
Hi, I am somewhat confused by the documentation for Role Based Field Filtering: https://docs.splunk.com/Documentation/Splunk/9.0.1/Security/planfieldfiltering According to the documentation, restricted commands (such as tstats) can return sensitive data that a role with field filters might not be allowed to access, and it is very risky if someone with malicious intentions tries to use them to circumvent role-based field filtering. The documentation provides a workaround: assign one of two capabilities to roles that have field filters. One of those capabilities is run_commands_ignoring_field_filter. Here is my question: if user_A has a role that includes the run_commands_ignoring_field_filter capability and has field filtering configured, and user_A runs tstats to search data that includes a field that requires masking, what happens in the result? Does it show the sensitive data or the masked data? Thank you in advance. From the docs: "These commands can return sensitive data that a role with field filters might not be allowed to access. They might pose a potential security risk for your organization if someone with malicious intentions tries to use them to circumvent role-based field filtering. As a result, the Splunk platform restricts these commands when used by people with roles that are configured with field filtering."
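For reference, the capability assignment the docs describe is an authorize.conf setting; a minimal sketch with a hypothetical role name:

    # authorize.conf -- role name is a placeholder
    [role_user_a_role]
    run_commands_ignoring_field_filter = enabled

As the capability name suggests, my reading is that searches run by such a role ignore the field filters, so a tstats search would return the unmasked (sensitive) values; that is exactly why the docs flag granting it as a decision to plan carefully.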