All Posts

I got direct access to the server again and checked the OS version. It is Red Hat Enterprise Linux release 9.4 (Plow). I will try to add a pipeline and check if it helps. I am also going to check whether there is something connected with sysmon. It was right: there were only a few log entries in audit.log during that period (I checked it on the filesystem). After my ssh connection there are more log entries.

Last 90 minutes:
/opt/splunkforwarder/var/log/splunk/audit.log 2
/opt/splunkforwarder/var/log/splunk/conf.log 1
/opt/splunkforwarder/var/log/splunk/configuration_change.log 3
/opt/splunkforwarder/var/log/splunk/health.log 26
/opt/splunkforwarder/var/log/splunk/metrics.log 8975
/opt/splunkforwarder/var/log/splunk/splunkd-utility.log 10
/opt/splunkforwarder/var/log/splunk/splunkd.log 1055
/opt/splunkforwarder/var/log/watchdog/watchdog.log 3
/var/log/audit/audit.log 1337
/var/log/messages 9418
/var/log/secure 543
journald://sysmon 6482

I found an interesting correlation. You can see a "gap", or change in behavior, in the graph; it starts after the UF is restarted. There are "Found currently active indexer. Connected to idx=X.X.X.X:9992:0, reuse=1." messages before the UF restart. Twenty minutes after the restart, they are back.
Hi all, I installed Splunk Enterprise 9.2.1 on my machine recently. There are no other external apps or components installed, but the UI is very slow. The loading time for each webpage, including the login page, is slow; it takes around a minute to finish loading. Could anyone provide some suggestions as to why this is happening and how to fix it?
I cloned the "access_combined" sourcetype for the access logs, and now the fields are being extracted as desired. However, I'm unable to parse the request logs as expected. If anyone has some time, ... See more...
I cloned the "access_combined" sourcetype for the access logs, and now the fields are being extracted as desired. However, I'm unable to parse the request logs as expected. If anyone has some time, I would appreciate assistance with parsing the request logs. It would be really helpful.   Request Logs Format: [09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 ABCDE-FGH-IJK256-LMN-SHA123 "GET /share/page/ HTTP/1.1" xxxxx [09/Aug/2024:07:50:37 +0000] xx.yyy.zzz.aa TLSv1.2 xxxxx-xxx-xxx256-xxx-xxx123 "GET /share/page/ HTTP/1.1" -
Hi Splunk experts, I want to compare the response codes of our API for the last 4 hours with the same time window over the last 2 days. If possible, I would need the results in a chart/table format showing the data as below. <Response Codes | Last 4 Hours | Yesterday | Day before Yesterday> As of now I am getting the results hour-wise. Can we achieve this in Splunk? Can you please guide me in the right direction?
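One way to get that layout is to run the same 4-hour window three times and merge the results; a minimal SPL sketch, assuming an index named my_api_index and a field named response_code (both assumptions):

index=my_api_index earliest=-4h latest=now
| stats count AS "Last 4 Hours" by response_code
| append [ search index=my_api_index earliest=-28h latest=-24h | stats count AS "Yesterday" by response_code ]
| append [ search index=my_api_index earliest=-52h latest=-48h | stats count AS "Day before Yesterday" by response_code ]
| stats values(*) AS * by response_code

The two appended searches cover the same 4-hour window shifted back 24 and 48 hours, and the final stats merges everything into one row per response code.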
I tried the below configuration, but it did not help. Can you suggest what could be the reason for it?
Hi @sherwin_r, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks @gcusello @gjanders @isoutamo for your inputs. I will have to decide which solution I am going for, and I will update if either worked as expected (however, I expect it to take a couple of days). Regards, Sherwin
@ITWhisperer Today I used the query from the default saved searches and manually collected the results into the summary index from the UI (using the collect command) rather than the Python script. The data is still not visible in the index.
Thanks for this. I was able to utilise your solution to build a working process for what I need!
Hi, I requested a Dev license a while ago, but I haven't heard anything from Splunk since. I have re-requested it a couple of times, but still no answer. I even emailed Splunk, yet even that email is being ignored. I am new to Splunk and I just want to get started with the Developer license. How do I get my request approved? For real this time, as I have already attempted every standard solution. I just want somebody to approve my request, that's all.
Change your lookup to have * at the beginning, e.g. *baddomain.com, then change/create the definition for the lookup to do WILDCARD searches.
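A minimal sketch of what that definition could look like, assuming the CSV has a single column named domain and the definition is named bad_domain_lookup (both assumptions):

# transforms.conf -- stanza and field names are assumptions
[bad_domain_lookup]
filename = bad_domain.csv
match_type = WILDCARD(domain)

And in the search, something like:

index=proxy | lookup bad_domain_lookup domain AS url_domain OUTPUT domain AS matched_bad_domain | where isnotnull(matched_bad_domain)

where url_domain stands in for whatever field holds the requested domain in your proxy logs.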
Hi @glingaraj, you have a grace period (30 or 60 days, I don't remember which) after expiration to pass the exam; otherwise you have to pass the Power User exam again. I know because I had this problem! Ciao. Giuseppe
I have a lookup file bad_domain.csv:

baddomain.com
baddomain2.com
baddomain3.com

I want to search the proxy logs for people who connect to the bad domains in my lookup list, but including subdomains, for example:

subdo1.baddomain.com
subdo2.baddomain.com
subdo1.baddomain2.com

Please help: how do I create that condition in an SPL query?
Is it possible to take the Splunk Admin certification after the Splunk Power User certification has expired?
This is a different question to the one asked. How do you know the location of the servers, and does the data for each panel come from the same search? If it comes from the same search, then you would be better off having a base search (see https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/Savedsearches), where the base search does all the data selection and aggregation and each panel only shows the data from that base search relating to the region of the servers/clients.
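A minimal Simple XML sketch of that pattern (the index, field, and region names are assumptions; Dashboard Studio has an equivalent via chained data sources):

<search id="base_region">
  <query>index=web_servers | stats count AS requests by region, host</query>
</search>
<panel>
  <title>EMEA servers</title>
  <table>
    <search base="base_region">
      <query>| search region="EMEA"</query>
    </search>
  </table>
</panel>

The expensive search runs once, and each panel's post-process search just filters the shared result set.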
Hi @KendallW, I tried as you suggested but it still doesn't seem to work. Below is a part of my dashboard code:

"viz_myN1qvY3": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_Ir18jYj7"
    },
    "title": "Availability By Market",
    "options": {
        "backgroundColor": "transparent",
        "tableFormat": {
            "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
            "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
            "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
            "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
        },
        "headerVisibility": "fixed",
        "fontSize": "small",
        "columnFormat": {
            "Availability": {
                "data": "> table | seriesByName(\"Availability\") | formatByType(AvailabilityColumnFormatEditorConfig)",
                "rowColors": "> table | seriesByName('Availability') | pick(AvailabilityRowColorsEditorConfig)",
                "rowBackgroundColors": "> table | seriesByName(\"Availability\") | rangeValue(AvailabilityRowBackgroundColorsEditorConfig)",
                "align": "center"
            }
        }
    },
    "context": {
        "AvailabilityColumnFormatEditorConfig": {
            "number": {
                "thousandSeparated": false,
                "unitPosition": "after",
                "precision": 2
            }
        }

The Availability column still has its values aligned to the right.
Hi @Joshua2, as @KendallW also said, this isn't the way Splunk works: you cannot locally store data on a UF. A UF has a local cache that stores data if the Indexers aren't available, but only for a short time, and it isn't possible to copy the cached logs to a USB drive. You should review your requirements with a Splunk Certified Architect or a Splunk Professional Services specialist to find a solution: e.g. send the logs to a local syslog, or copy them to text files (using a script) and then store them on the USB drive. But as I said, this solution must be designed by an expert; this isn't a question for the Community. Ciao. Giuseppe
Hi @sidnakvee, if you don't see any other host in _internal, this means that your PCs aren't connected to Splunk Cloud. As described at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsingforwardingagentsCloud, you have to download the Splunk Forwarder app from Splunk Cloud, which contains the credentials and configuration to connect to your Splunk Cloud instance. So the sequence of activities will be:

install the Splunk Universal Forwarder on your PC,
download and install the Splunk Forwarder app from your Splunk Cloud instance,
download and install Splunk_TA_windows and the Splunk App for Sysmon from apps.splunk.com,
enable the inputs you want in both apps (a minimal sketch follows below),
enable sysmon on your PC.

You will probably need to restart Splunk on the Forwarder. Let me know. Ciao. Giuseppe
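For the input-enablement step above, a minimal inputs.conf sketch; the stanza names assume the standard Splunk_TA_windows and Sysmon add-on defaults, so verify them against the versions you install:

# inputs.conf -- stanza names are assumptions based on add-on defaults
[WinEventLog://Security]
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true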
Has anyone ever faced or implemented this on Splunk ES? I'm facing an issue when trying to add a TAXII feed from an OTX API connection. I already checked the connectivity and made some changes to the configuration, up to disabling the preferred captain on my search head, but it is still not resolved. I also know there is an app for this, but I just want to clarify whether this option is still supported or not. Here are my POST arguments:

URL: https://otx.alienvault.com/taxii/discovery
POST arguments: collection="user_otx" taxii_username="API key" taxii_password="foo"

But the download status stays on "TAXII feed polling starting", and when I check the PID information:

status="This modular input does not execute on search head cluster member" msg="will_execute"="false" config="SHC" msg="Deselected based on SHC primary selection algorithm" primary_host="None" use_alpha="None" exclude_primary="None"
As per @ITWhisperer's comment, yes, it is case-sensitive. Use eval with upper() or lower() to convert them all to the same case.
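For example, a minimal sketch, assuming the field being matched is named domain and reusing the hypothetical bad_domain_lookup definition from above:

... | eval domain=lower(domain) | lookup bad_domain_lookup domain OUTPUT domain AS matched_bad_domain

Make sure the values stored in the lookup file are already lower-case so both sides agree.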