We've run into two odd issues while testing a Shibboleth implementation, and I'm not sure whether they are related. AQRs are set up so users are not cached. We noticed that a user without content only shows up on the search head the load balancer sent them to, so content cannot be assigned to them until they've accessed every search head. Along the same lines, "replicate certificates" does not do what it says: it does not replicate the IdP cert across the search heads, but as soon as it was enabled, content, users, and SAML groups did replicate peer-to-peer. I assume we have an incorrect setting in place, but any help is very much appreciated!
If you are not seeing failed logins in your splunkd.log, you can try updating the log.cfg or log-local.cfg file to add debugging. This should give you more information in the splunkd.log. The log.cfg/log-local.cfg file is located in the .../splunk/etc directory. Find "category.AuthenticationProviderLDAP=INFO" and change INFO to DEBUG. Restart the Splunk service. This should at least give you the username it is finding. There may be other options you can change to DEBUG to give you more information.
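A minimal sketch of the change described above, assuming the usual layout where log-local.cfg overrides log.cfg:

```ini
# $SPLUNK_HOME/etc/log-local.cfg (copy the line over from log.cfg if the
# file does not exist yet); DEBUG makes LDAP authentication attempts verbose.
category.AuthenticationProviderLDAP=DEBUG
```

Restart splunkd after the change for it to take effect, and remember to set it back to INFO once you're done debugging.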
Splunk doesn't automatically update online - you have to manually download a new version and upload it to server(s). The sources for app downloads are listed in https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf#Remote_applications_configuration_.28e.g._SplunkBase.29
I'm going to try and push through today, but I reckon I'll be in the same place Monday. I really appreciate your help so far and would be so grateful if you could dig in deeper next week. Thanks again and enjoy your weekend!
Good day. I've browsed the official documentation and the forum for some time and haven't found exactly the answer I need, so here is my question (it applies to both HF and Enterprise). I would like to limit the internet access of my HF. Two possible outbound connections come to mind:
- Updating Splunk
- Updating plugins from Splunkbase
After some research, I haven't found which IP addresses or URLs are the right ones to configure in the firewall. Any help?
Hi Splunkers, I recently noticed an issue while opening dashboards—both default and custom app dashboards—in Splunk. I'm encountering a console error that seems to be related to a JavaScript file named layout.js. The error: Failed to load resource: the server responded with a status of 404 (Not Found) :8000/en-US/splunkd/__raw/servicesNS/sanjai/search/static/appLogo.png:1 The screenshots above are taken from the browser’s developer console—specifically the Network tab. I'm unsure why this is occurring, as I haven’t made any recent changes to the static resources. Has anyone else run into a similar issue? Would appreciate any insights!
Hello PickleRick, apologies: I had the wrong impression. The events were not really gone: I deleted them myself, hoping they would be reindexed immediately with the correct _time, as they were in the previous trials. This time, however, after deleting them from the indexer they were not reindexed automatically, so I incorrectly used the word "gone". So I temporarily added a crcSalt in the inputs.conf of the forwarder, all events were reindexed with the correct _time, and everything worked perfectly thanks to your suggestion. Many thanks again for your solution.
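For anyone following along, the temporary change mentioned above typically looks like the sketch below; the monitored path is a placeholder, not the one actually used here:

```ini
# inputs.conf on the forwarder -- the monitor path is hypothetical.
[monitor:///var/log/myapp/data.log]
# <SOURCE> salts the file's checksum with its full path, which makes
# Splunk treat the file as new and reindex it from the beginning.
crcSalt = <SOURCE>
```

Remove the crcSalt again once the file has been reindexed, otherwise renamed or rotated files can end up indexed twice.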
Splunk UF can read a CSV file, but not as a lookup. The UF recognizes it as a regular log file, and no, Splunk does not automatically upload it as a lookup. You might consider using the Splunk UF to monitor the CSV file. Create a monitoring stanza:
[monitor://path...]
sourcetype=
index=
and then set props.conf with this setting (note that for structured files, indexed extractions are applied on the forwarder rather than the indexer):
[theCSVfile]
INDEXED_EXTRACTIONS=csv
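Put together, and with purely hypothetical path, sourcetype, and index names filled in for illustration, the two pieces might look like this:

```ini
# inputs.conf on the Universal Forwarder -- path, sourcetype, and index
# are placeholders, not values from the thread above.
[monitor:///opt/data/assets.csv]
sourcetype = theCSVfile
index = main

# props.conf on the forwarder -- parse the file as CSV, using the header
# row for field names, and index each column as a field.
[theCSVfile]
INDEXED_EXTRACTIONS = csv
```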
Hello, first and foremost, always start with creating a backup:
1- Create a backup of $SPLUNK_HOME/etc on every single Splunk server you have.
2- Start with standalone servers and move on to clusters.
3- Grab the wget command from splunk.com.
4- If you choose to use rpm:
- stop Splunk: /opt/splunk/bin/splunk stop
- run: rpm -U splunk_package....
5- Start Splunk: /opt/splunk/bin/splunk start --accept-license
6- Check if everything went well: tail -f /opt/splunk/var/log/splunk/splunkd.log
Hello, it looks like both add-ons are archived. Consider creating a script; if not, the HTTP Event Collector (HEC) will be your best bet. As far as I know, Jira supports HEC.
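As a rough sketch of the HEC route: events are posted as JSON to Splunk's standard /services/collector/event endpoint with a token in the Authorization header. The host, token, index, and Jira field values below are all placeholders, not anything from this thread:

```shell
# Hypothetical HEC post for a Jira issue event. HEC_URL and HEC_TOKEN
# are placeholders; substitute your own collector endpoint and token.
HEC_URL="https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# Build the event payload; "event" holds the issue data, and sourcetype
# and index route it on the Splunk side.
PAYLOAD='{"event": {"key": "PROJ-123", "status": "Done"}, "sourcetype": "jira:issue", "index": "jira"}'
echo "$PAYLOAD"

# Uncomment to actually send (requires a reachable HEC endpoint):
# curl -k "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$PAYLOAD"
```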
@deepdiver From all the messages here, it appears to be something around the KVStore. You can check the KVStore status first using the command below:
./splunk show kvstore-status
If it's up and running, good; otherwise, check the splunkd logs first to find the exact ERROR entries.
Hi Splunkers! I'm currently working on a project where the goal is to visualize various KPIs in Splunk based on Jira ticketing data. In our setup, Jira is managed separately and contains around 10 different projects. I'm considering multiple integration methods:
- Using the Jira Issue Input add-on
- Using the Splunk for JIRA add-on
Before diving in, I'd love to hear from your experiences:
- Which method do you recommend for efficient and reliable integration?
- Any specific limitations or gotchas I should be aware of?
- Is the Jira Issue Input add-on scalable enough for multiple Jira projects?
Thanks in advance for your insights!
OK, I quickly managed to reindex the file through a temporary crcSalt in the forwarder's inputs.conf. The events are now indexed fine thanks to removing DATETIME_CONFIG=NONE. Many thanks to PickleRick for providing the solution.
OK, I removed the:
DATETIME_CONFIG=NONE
But now all events are gone. I also tried to empty the index:
/opt/splunk/bin/splunk clean eventdata -index lts
This action will permanently erase all events from the index 'lts'; it cannot be undone.
Are you sure you want to continue [y/n]? y
Cleaning database lts.
And restarted both the indexer and the forwarder, but no events are sent.
Hi Romedawg, we're currently encountering the same issues; could you elaborate a bit more on your solution? Specifically, what command did you use to generate the CA/self-signed certificate? You're making a server.pem file, but do you use that elsewhere in your setup? Also, in the second part of your post, you run an openssl verify on server.crt. Could you clarify where that file comes from? Is it the same one referenced in your sslConfig stanza, server.pem? Or is this another file entirely?
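For anyone else hitting this while waiting for an answer: one common way to end up with these files is a self-signed certificate acting as its own CA. This is only a guess at what the original setup did; the filenames mirror the post, and the CN is a placeholder:

```shell
# Hypothetical recreation of the setup being asked about.
# Generate a self-signed certificate and unencrypted key; since it is
# self-signed, the cert acts as its own CA. The CN is a placeholder.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=splunk.example.com" \
  -keyout server.key -out server.crt

# Splunk's sslConfig serverCert typically points at a single PEM file
# containing the certificate followed by the private key.
cat server.crt server.key > server.pem

# Verify the certificate against itself, as in the original post.
openssl verify -CAfile server.crt server.crt
```

If the verify step prints OK here but fails in your environment, that would suggest server.crt and the CA file come from different chains.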