For the past week I've been trying to create a CSV using Splunk Add-on Builder, but I still haven't figured out a solution. If anybody has a solution, please share. Thanks in advance.
I am using the SmartStore feature of Splunk. The S3 protocol is used to tier data to Ceph storage. Can Splunk SmartStore delete buckets from object storage using the S3 protocol?
Hello,

To make syslog communication over TLS work, I followed this procedure https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates on one node. I backed up the original cacert.pem and copied the newly created root certificate to $SPLUNK_HOME/etc/auth/cacert.pem. I also copied the server certificate to $SPLUNK_HOME/etc/auth/server.pem and changed the configuration in $SPLUNK_HOME/etc/apps/launcher/local/inputs.conf and $SPLUNK_HOME/etc/system/local/server.conf.

Since then, I get this error log (xxx corresponds to the license master server):

ERROR LMTracker - failed to send rows, reason='Unable to connect to license master=https://xxx:8089 Error connecting: SSL not configured on client'

So I tried restoring the original cacert.pem and server.pem, but I still get the error. I also tried connecting to the license master over TLS with curl, but I get an error (Peer's Certificate has expired). I checked the license master certificate and it appears to have expired a month ago, yet license verification (and curl) still works from other Splunk nodes on which I did not change the root certificate. I am also unable to renew that certificate, as it is signed by the default root CA and I do not have the passphrase for the private key. The connection to this node's web interface does not work either; I get an internal server error.

Could you please help me figure out what is blocking the license verification? Do not hesitate to ask if you need more details. Thank you.
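To narrow down which certificate on the node has actually expired, openssl can print the expiry date of each PEM file directly. A hedged sketch (the paths shown in the usage comment are the Splunk defaults and may differ on your deployment):

```shell
# Sketch: print the expiry date of a PEM certificate, to identify
# which certificate on the node has expired.
cert_expiry() {
    openssl x509 -noout -enddate -in "$1"
}

# For example, against the default Splunk certificate paths:
# cert_expiry "$SPLUNK_HOME/etc/auth/server.pem"
# cert_expiry "$SPLUNK_HOME/etc/auth/cacert.pem"
```

Comparing the notAfter dates of server.pem, cacert.pem, and the license master's certificate should show whether the restored files are really the originals.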
I have used CSS to increase the size of pie chart legend labels:

.highcharts-data-label text tspan {
    font-size: 14px !important;
    word-break: break-all !important;
}

On smaller panels with long text, however, the text overflows out of the panel. I've attempted to use word-break to stop this from happening, without any result. Please refer to the attached screenshot. Is there any way to break the text so it stays within the panel?
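One possible explanation, offered as an assumption: the labels are SVG <tspan> elements, and SVG text does not honor the CSS word-break property, which would be why the rule has no effect. A hedged workaround sketch is to clip the overflow at the container level instead (the selector names below are assumptions and may differ by Splunk/Highcharts version):

```css
/* Sketch: SVG <text>/<tspan> ignore word-break, so constrain the
   chart container instead. Selector is an assumption; inspect your
   panel's DOM to confirm the right class names. */
.dashboard-panel .highcharts-container {
    overflow: hidden !important;
}
```

This clips rather than wraps the label; actual wrapping would likely need to happen on the Highcharts side (e.g. a label formatter) rather than in CSS.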
Hi Splunkers, I heard some rumors that the Microsoft 365 App and other Microsoft-related apps are planning to change their free versions to paid versions. Please let me know the truth about this. Kindly please help. Best regards.
Hello Splunk team, do you know if it's possible to run a Splunk search from a Python script, taking into account the selection of the desired period (last week, last month, ...)? For example, for a simple search:

| append [search sourcetype=ESP_Histo env="Web" service="Application" apps="Web Sharepoints" fn="*" managed_entity="*- URL" sampler="SHAREPOINT - URL" ID_EVENT | eval Applications=case( apps like "%Web Sharepoints%","WEB SHAREPOINTS") | `transactions` ]
| eval max_time=if(info_max_time == "+Infinity", now(), info_max_time)
| eval min_time=if(info_min_time == "0.000", 1577836800, info_min_time)
| eval periode1=max_time-min_time
| stats sum(duration) AS durationindispo by Applications, periode1
| outputcsv append=true override_if_empty=false web_search
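For what it's worth, this is possible via Splunk's REST API (or the splunk-sdk Python package). As a minimal sketch, assuming splunkd listens on the default management port 8089: the time range is passed as earliest_time/latest_time parameters alongside the SPL string, using Splunk's relative time modifiers. The helper below only builds the request body; names like build_export_body are illustrative, not a real API:

```python
import urllib.parse

def build_export_body(spl, earliest="-7d@d", latest="now"):
    """Build the form body for POST /services/search/jobs/export.

    Relative time modifiers select the period: "-7d@d" for the last
    week, "-1mon@mon" for the last month, and so on.
    """
    # The export endpoint expects a full search string; prepend the
    # "search" command unless the SPL starts with a generating pipe.
    if not spl.lstrip().startswith("|"):
        spl = "search " + spl
    return urllib.parse.urlencode({
        "search": spl,
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "csv",
    })
```

The resulting body would then be POSTed with authentication to https://<host>:8089/services/search/jobs/export, e.g. with urllib or the requests package, and the CSV response written to disk.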
Hi there, I'm using Splunk Enterprise 7.0.3 and Splunk DB Connect 3.1.3. I have a Data Lab Output configuration using a real-time saved search to export Splunk logs to a MySQL DB, and it was working just fine until recently. What I'm experiencing now is that DB Connect doesn't export some of the data from Splunk to the DB (some of it is missing from the DB). The issue is resolved when I use a scheduled search with a cron schedule, but I'd like to use a real-time saved search again if possible. I hope I can get a solution. Thanks in advance.

EDIT: I forgot to mention that the Data Lab Output actually stopped exporting from the Splunk logs entirely approximately 2 weeks ago. I changed the real-time saved search time range from "all time (real-time)" to an "all time, 15 minute window", and then it worked again, but with the issue I explained at the beginning.
Hi everyone,

I am trying to remove partial duplicates in the same field, but couldn't find a solution yet. For instance, I have these values in the same field:

http://www.g
http://www.go
http://www.google.com

I would like to keep only the complete value (http://www.google.com). I tried to use dedup and mvfilter:

eval url_in_parameter=mvfilter(!url_in_parameter LIKE url_in_parameter*)

I am still a beginner with Splunk and couldn't find any similar topics on the internet.

Thanks for your help and have a good day.
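One possible approach, assuming the partial values arrive together as a multivalue field in a single event: lexicographic sorting places a prefix immediately before any longer string that extends it, so the last element of the sorted list is the most complete value. A hedged SPL sketch:

```
| eval url_in_parameter=mvindex(mvsort(url_in_parameter), -1)
```

Note this keeps exactly one value per event, so if the field can also contain several unrelated URLs, it is only a starting point rather than a general prefix filter.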
I have two fields, skill1 and skill2 (screenshots of each attached). Both of these queries produce results:

timechart span=1d count by skill1
timechart span=1d count by skill2

I want to create a separate variable, skill, which contains the difference between skill1's counts and skill2's counts, and create a timechart out of it. I tried:

timechart span=1d count by skill1-skill2

But it's not working. Any help would be appreciated!
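A hedged sketch of one way to get the difference: count each field separately in a single timechart, then subtract. Note the assumption that count(field), which counts events where the field is present, matches the semantics of the original "count by" charts:

```
| timechart span=1d count(skill1) as skill1_count count(skill2) as skill2_count
| eval skill=skill1_count-skill2_count
| fields _time skill
```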
Our Splunk deployment is on-prem and we have automated deployment based on app updates. Users create a PR, make updates, and push to Bitbucket. Automated tests run, and if everything is OK the changes are in production within the hour. We are considering a move to Splunk Cloud but are worried about the manual app vetting process. That process seems cumbersome and might mean that the 5-10 changes across multiple apps we make each week will form an endless backlog. How are you finding the app vetting process? Is it working OK, or are you finding it painful? This question is aimed at large corporates who have moved from on-prem or BYOL to Splunk Cloud.
Hi, I have 2 queries:

(index=abc OR index=def) category=* OR NOT blocked=0 AND NOT blocked=2
| rex field=index "(?<Local_Market>[^cita]\w.*?)_"
| stats count(Local_Market) as Blocked by Local_Market
| addcoltotals col=t labelfield=Local_Market label="Total"
| append [search (index=abc OR index=def) blocked=0
    | rex field=index "(?<Local_Market>\w.*?)_"
    | stats count as Detected by Local_Market
    | addcoltotals col=t labelfield=Local_Market label="Total"]
| stats values(*) as * by Local_Market
| transpose 0 header_field=Local_Market column_name=Local_Market
| addinfo
| eval date=info_min_time
| fieldformat date=strftime(date,"%m-%d-%Y")
| fields - info_*

1> The query above gives me the correct date/time in the UI, but when I schedule it as a report the date comes through as epoch time in the CSV file.
2> Also, how can I get the date field in the first column instead of the last column, without disturbing the other fields?

Thanks in advance.
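A hedged sketch addressing both points: fieldformat only changes how results are rendered in the UI, not the stored values, which would explain why the scheduled CSV still contains epoch time; using eval makes the formatted string the actual field value, and a final table can put date first. (Behavior of the trailing wildcard in table may vary slightly by version; this replaces the last four lines of the search above.)

```
| addinfo
| eval date=strftime(info_min_time, "%m-%d-%Y")
| fields - info_*
| table date *
```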
The internal metrics log has information on TCP in and out. This is useful for seeing the traffic profile between Splunk tiers. We are planning our migration to the cloud, and this will help with sizing the cloud pipe. Has anyone created dashboards with this information they could share with us?
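As a starting point for such a dashboard, a hedged sketch over the internal index (the field names kb and hostname follow the usual metrics.log tcpin_connections group, but may vary by version, and tcpout_connections would cover the outbound side):

```
index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=1h sum(kb) as inbound_kb by hostname
```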
Hi All, my customer wants to collect mail transmission/reception logs using the "Microsoft O365 Email Add-on for Splunk" app, but no data is received and I get the error below (I attached the full error logs):

-----------------------------
2021-09-15 11:03:56,355 ERROR pid=50226 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/ta_microsoft_o365_email_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/o365_email.py", line 136, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/input_module_o365_email.py", line 182, in collect_events
    messages.append(messages_response['value'])
KeyError: 'value'
-----------------------------

Thank you in advance. Best regards, Bob Hwang
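For context on the traceback: the KeyError suggests the Microsoft Graph response did not contain a "value" key, which typically happens when the API returns an error object instead of results (for example an authentication or permissions problem). A hedged sketch of that distinction (this is illustrative handling, not the add-on's actual code):

```python
def extract_messages(messages_response):
    # On success, Graph list endpoints return results under "value";
    # on failure they return an "error" object instead.
    if "error" in messages_response:
        detail = messages_response["error"].get("message", "unknown Graph API error")
        raise RuntimeError(f"Graph API request failed: {detail}")
    return messages_response.get("value", [])
```

So a useful first step may be to log the raw response around line 182 and check whether it carries an "error" object, which would point at the app registration's credentials or API permissions.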
Hello Splunk, I want to create a V-model structure in a dashboard, but I am not aware of how to do it. Can you please help me with it? I am posting a picture below of how I want the dashboard to look. Thanks in advance, Renuka
Hello all. In indexer clustering, one peer is not searchable and its status is Down. What is the process to fix it? Please, can anyone help? This is a big issue we are facing.
When you get an incident in Splunk ES, the notable is often populated with 'additional fields', some of them custom, some out of the box. I'm looking to see what fields would be displayed for a notable, from either searching the notable macro or the API if need be. Searching the notable macro, I often get 100+ fields for a notable, but maybe only 15 are displayed in the notable itself, whereas some other notable may have only 5 displayed. Is there a way to do a search that indicates which fields would be displayed in the 'additional fields' of the notable? For reference, the additional fields I'm talking about are described under 'Add a field to the notable event details' here: https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Customizenotables
Hi there, I'm seeing a strange problem with version 8.0.8.

I have a search to build a lookup table one time only, which has to look back 60 days in order to populate correctly. I'm also working on hourly updates for this, but that part is much less of a problem. I do not have access to summary indexing here, so don't bother mentioning it!

It so happens that the admins of this site have enabled tsidx reduction, so I'm seeing this warning:

Search on most recent data has completed. Expect slower search speeds as we search the reduced buckets.

Not in itself a huge problem, as in the search box I eventually see all the available results. HOWEVER, when I run this search into outputlookup, it doesn't seem to wait for all the results to render, and goes off and writes the lookup table without (very annoyingly) waiting for one particular column to appear. Is this a bug? Has anyone else seen this? Any way around it? It seems to be entirely consistent, and I want to stop hammering the Splunk instance with this very expensive search!

Cheers, Charles
Hello All, I have set up the Splunk Add-on and Splunk App for Unix and Linux. Data is flowing properly; however, I am having an issue with alerts. I am trying to set up alerts for various things to Slack. I have the first alert, on memory, working. I set it to 1 min real-time and it seems to work just fine. This is the working query:

`os_index` source=vmstat | where max(memUsedPct) > 90 | stats max(memUsedPct) by host

However, when I try to do the same for disk, it does not work. I have tried expanding to 5 min and 30 min real-time windows, but the only way I get data to show up in this query is by removing the where clause. I also tried using something like latest() instead of max(), but that didn't help. What am I doing wrong here?

`os_index` source=df | where max(UsePct) > 10 | stats max(UsePct) by host

Thank you, jackjack
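A hedged sketch of an alternative shape for the disk search: in a where clause before stats, max() is evaluated per event (with a single argument it is just that field's value), and df's UsePct may include a trailing "%" that prevents a numeric comparison. Converting first and filtering after the aggregation may behave more predictably (field names beyond the post's own are assumptions):

```
`os_index` source=df
| eval use_pct=tonumber(replace(UsePct, "%", ""))
| stats max(use_pct) as max_use_pct by host
| where max_use_pct > 10
```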
I'm trying to use a link list dropdown to fill in URLs of specific CSV files from the Splunk Lookup Editor app into a dashboard panel. I've followed the instructions to re-enable iframes in Splunk 8.0.x+, and the frame itself forms, but it's stuck saying "Loading". After doing some web browser debugging, it appears this is because the Lookup Editor app is calling JavaScript, and there is an HTML tag that sets class="no-js", but I'm not a very good HTML debugger and I can't tell whether this is being done in the parent app's CSS or in the child app's CSS. I think, however, that if I could get the iframe to support the JavaScript, I should be in good shape. It's all the same Splunk instance, but I haven't had any luck yet. Any help is greatly appreciated!
I have the following log:

!!! --- HUB ctxsdc1cvdi013.za.sbicdirectory.com:443 is unavailable --- !!!
user='molefe_user' password='molefe' quota='user' host='002329bvpc123cw.branches.sbicdirectory.com' port='443' count='1'
!!! --- HUB 002329bvpc123cw.branches.sbicdirectory.com:443 is unavailable --- !!!
host='005558bvpc5ce4w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 005558bvpc5ce4w.za.sbicdirectory.com:443 is unavailable --- !!!
host='41360jnbpbb758w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 41360jnbpbb758w.za.sbicdirectory.com:443 is unavailable --- !!!
host='48149jnbpbb041w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 48149jnbpbb041w.za.sbicdirectory.com:443 is unavailable --- !!!
user='pips_lvl_one_user' password='pips_lvl_one' quota='user'

I have the above log and I'm struggling to extract the following items:

ctxsdc1cvdi013.za.sbicdirectory.com = workstation ID
is unavailable = status
molefe = quota
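A hedged sketch of rex extractions for those three items. The quota extraction assumes the wanted value is the user field with the _user suffix stripped (as in user='molefe_user'); adjust if it actually comes from a different key:

```
| rex "HUB\s+(?<workstation_id>[^\s:]+):\d+\s+(?<status>is \w+)"
| rex "user='(?<quota>.+?)_user'"
```

If an event contains several HUB lines, adding max_match=0 to the first rex would capture all of them as multivalue fields.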