All Posts


Could someone provide a solution for this problem? Your assistance would be greatly appreciated.
We are currently indexing large log files (~1 GB in size) on our Splunk indexer using the Splunk Universal Forwarder. All the log data will be stored in a single index. We want to make sure the log data is deleted one week after the date it was indexed. Is there a way to achieve this?
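A minimal sketch of the kind of indexes.conf retention setting this usually involves, assuming a hypothetical index named app_logs. Note that retention in Splunk is enforced per bucket, based on the age of the newest event in the bucket, so data is not removed exactly one week per event:

    # indexes.conf on the indexer (app_logs is a hypothetical index name)
    [app_logs]
    homePath   = $SPLUNK_DB/app_logs/db
    coldPath   = $SPLUNK_DB/app_logs/colddb
    thawedPath = $SPLUNK_DB/app_logs/thaweddb
    # Roll buckets to frozen (deleted by default, unless coldToFrozenDir or
    # coldToFrozenScript is set) once the newest event in the bucket is older
    # than 7 days (604800 seconds).
    frozenTimePeriodInSecs = 604800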
Hi @asncari

I just started going through this Splunk UI Toolkit and was able to resolve the issue using the following method. There needs to be an update made to the webpack config.

My setup:
node -v  v21.7.1
npm -v   10.5.0
yarn -v  1.22.22

Do the following:

1. Install querystring-es3 and querystring using npm:

    npm i querystring-es3
    npm i querystring

2. Update the webpack.config.js file (MyTodoList\packages\react-to-do-list\webpack.config.js):

    const path = require('path');
    const { merge: webpackMerge } = require('webpack-merge');
    const baseComponentConfig = require('@splunk/webpack-configs/component.config').default;

    module.exports = webpackMerge(baseComponentConfig, {
        entry: {
            ReactToDoList: path.join(__dirname, 'src/ReactToDoList.jsx'),
        },
        output: {
            path: path.join(__dirname),
        },
        resolve: {
            fallback: {
                querystring: require.resolve('querystring-es3'),
            },
        },
    });

3. Re-run the setup steps, let it build successfully, and re-link any modules. Then head into react-to-do-list and execute the yarn start:demo command. Your build should succeed, and if you navigate to localhost:8080 you should see the React app.

Note: I had a socket error (ERR_SOCKET_BAD_PORT NaN) for port 8080. I updated the build.js file to enforce the use of port 8080 (/packages/react-to-do-list/bin/build.js):

    demo: () => shell.exec('.\\node_modules\\.bin\\webpack serve --config .\\demo\\webpack.standalone.config.js --port 8080'),

If the reply helps, karma would be appreciated.
Hi, I want to embed a dashboard in my own webpage.

First, I found the "EDFS" app, but after installing it and following the steps, I didn't see the "EDFS" option in the input. When this is included in the HTML file, it responds with "refused to connect":

    <iframe src="https://127.0.0.1:9999" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

Also, if I add "trustedIP=127.0.0.1" to the server.conf file, then when I open Splunk Web using "127.0.0.1:8000" it shows an "Unauthorized" error.

Additionally, I found that adding "x_frame_options_sameorigin = 0" and "enable_insecure_login = true" to the web.conf file, and including this in the HTML file:

    <iframe src="http://splunkhost/account/insecurelogin?username=viewonlyuser&password=viewonly&return_to=/app/search/dashboardname" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

shows the Splunk Web login page with the error message "No cookie support detected. Check your browser configuration." If I try to log in with the username and password, it still doesn't work and shows a "Server error" message. If I use a Firefox private window to open the HTML file, it skips the login page and displays "Unauthorized."

Is there a way to solve these issues, or are there alternative methods to display the dashboard on an external webpage? Thanks in advance.
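For reference, a minimal sketch of the configuration the post refers to, assuming the web.conf settings go into the global [settings] stanza and trustedIP into the [general] stanza of server.conf on the search head (names and values are taken from the post itself; whether relaxing these protections is advisable is a separate security question):

    # web.conf (e.g. $SPLUNK_HOME/etc/system/local/web.conf)
    [settings]
    # Allow Splunk Web pages to be framed by pages from other origins
    x_frame_options_sameorigin = 0
    # Allow the /account/insecurelogin endpoint used in the iframe URL
    enable_insecure_login = true

    # server.conf
    [general]
    # Treat requests from this IP as trusted
    trustedIP = 127.0.0.1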
Dear Karma,

We tried the suggested option. Can you please guide us on where to update the file, as we suspect the issue is with the location where we are writing the regex? Currently, we have updated the Windows folder on the deployment server and the /etc/system/local/ directory on the HF.

Thanks,
Suraj
There are some fields which are always present - source, sourcetype, host, _raw, _time (along with some internal Splunk fields). But they each have their own meaning, and you should be aware of the consequences if you want to fiddle with them. In your case you could most probably add a field matching the appropriate CIM field (for example dvc_zone). It could be a search-time field populated from a static lookup listing your devices and associating them with zones, or (and that's one of the cases where indexed fields are useful) an indexed field, possibly added at the initial forwarder level.
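A minimal sketch of the lookup-based variant, using hypothetical names (zones.csv, zone_by_host, a sourcetype called my_sourcetype) and assuming the CSV has host and dvc_zone columns and lives in the app's lookups directory:

    # transforms.conf
    [zone_by_host]
    filename = zones.csv

    # props.conf
    [my_sourcetype]
    # Populate dvc_zone at search time by matching the event's host against the CSV
    LOOKUP-zone_by_host = zone_by_host host OUTPUT dvc_zone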
The search might get cancelled if, for example, your user exceeds resource limits. If you are trying to reproduce the issue with a user whose role has differently set limits (or no limits at all), you might not hit the same restrictions.
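For illustration only, a sketch of the kind of per-role quotas that can cause this, using a hypothetical role name; the actual limits in force depend on how your roles are configured:

    # authorize.conf (hypothetical role)
    [role_limited_analyst]
    # Maximum number of concurrent searches for a user with this role
    srchJobsQuota = 3
    # Maximum disk space (MB) that this user's search artifacts may consume
    srchDiskQuota = 100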
Yes, for certain sources it's a bit hard for me to override the source name; I will try to see what can be done. I was looking at source as it's one of the few fields that seems to be common across multiple models, e.g. Network, Authentication, Change, etc.
Well, look into the CIM definition and check which fields might be relevant to your use case. "Zone" is a relatively vague term and can have different meanings depending on context. For example, the Network Traffic data model has three different zone fields: src_zone, dest_zone and dvc_zone. Of course filtering by the source field is OK, but it might not contain the thing you need.
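As an illustration only, a sketch of how such a zone field could be used to filter an accelerated CIM Network Traffic search, assuming dvc_zone is actually populated in the data:

    | tstats summariesonly=true count
        from datamodel=Network_Traffic
        where All_Traffic.dvc_zone="ZoneA"
        by All_Traffic.src_ip, All_Traffic.dest_ip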
We have recently migrated to SmartStore. Post-migration, the search factor (SF) and replication factor (RF) are not met. Can anyone help me with the troubleshooting steps?
The join command (which, as a rule of thumb, should not really be used unless there is a very good reason for it and there is no other way) uses two searches. So the .csv file you're talking about must be referenced somehow within such a search - you can't just search from a file.
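A minimal sketch of what referencing the file within the search might look like, with hypothetical names (users.csv, a user field, index=main):

    index=main sourcetype=access_logs
    | join type=left user
        [| inputlookup users.csv
         | fields user, department ]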
It might be true. There are two important things here.

1. You can only distribute _apps_ from the deployment server. So the apps get pulled by the deployment client (in this case your UF) and put into your $SPLUNK_HOME/etc/apps directory.

2. Splunk builds the "effective config" by layering all relevant config files according to the precedence rules described here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles

So depending on where the install wizard puts those settings, you might or might not be able to overwrite them with an app deployed from the DS. You can check where the settings are stored by running

    splunk btool inputs list --debug

This will show you the effective config entries along with the file in which each is defined. If a setting is in some app/something/... file, it can be overwritten, possibly with some clever app naming in order to put the config file alphabetically before/after another app. But if it's in system/local... you can't overwrite it with an app. And that's why it's not advisable to put settings in system/local unless you're really, really sure you want to do that. If you put settings there, you can't overwrite them in any way later unless you manually edit the system/local/whatever.conf file on that particular Splunk component (OK, there is an exception for clustered indexers, but that's for another time).
Hey @PickleRick

2. You are absolutely right. I just tried with different users on the same accelerated model, same query but different roles, and the restricted user gets far fewer results.

So, can I say the way forward seems to be one common data model then? Is there any recommended or easy way to perform filtering between zones in a summary search, for example? Is using where source=ZoneA* alright then?
After encoding, it does run, but there are no results. I do get results for queries without eval and URL-encoding.
OK. There are additional things to consider here.

1. A data model is not the same as a data model's accelerated summary. If you just search from a non-accelerated data model, the search is translated underneath by Splunk into a normal search according to the definition of the dataset you're searching from. So all role-based restrictions apply.

2. As far as I remember (but you'd have to double-check it), even if you search from accelerated summaries, the index-based restrictions should still be in force, because the accelerated summaries are stored along with the normal event buckets in the index directory and are tied to the indexes themselves.

3. And because of that, exactly the same goes for retention periods. You can't have an accelerated summary retention period longer than the events' retention period, since the accelerated summaries would get rolled to frozen with the buckets the events come from.

So there's more to it than meets the eye.
If this doesn't work, you could try using CSS with the token value.
If your time periods are always 1 hour, you only need the start time and you can bin / bucket _time with span=1h; this gives you a time you can match on as well as your values.

    <your index>
    | bin span=1h _time as period_start
    | dedup period_start Value
    | eval flag = 1
    | append
        [| inputlookup lookup.csv
         | eval period_start = ``` convert your time period here ```
         | eval flag = 2]
    | stats sum(flag) as flag by period_start Value
    ``` flag = 1 if only in index, 2 if only in lookup, or 3 if in both ```
    | where flag = 2
Hi Splunkers, I have a question about a possible issue with UF management via a Deployment Server. In a customer environment, some UFs have been installed on Windows servers. They send data to a dedicated HF. Now we want to manage them with a Deployment Server. The point is this: those UFs were installed with the graphical wizard, and during that installation it was set which data to collect and send to the HF. So inputs.conf was configured during this phase, in a GUI manner. Now, in some Splunk course material (I don't remember which one; it should be the Splunk Enterprise Admin one), I got this warning: if inputs.conf for a Windows UF is set with the graphical wizard, like in our case, the Deployment Server could have problems interacting with them - it might not even be able to manage them. Is this confirmed? Do you know in which section of the documentation I can find evidence of this?
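For context, a minimal sketch of how a UF is pointed at a Deployment Server, with a hypothetical hostname and the default management port (whether the wizard-created inputs can then be overridden by deployed apps comes down to the configuration-precedence point discussed elsewhere in this thread):

    # deploymentclient.conf on the UF (e.g. $SPLUNK_HOME/etc/system/local/)
    [deployment-client]

    [target-broker:deploymentServer]
    # Hypothetical deployment server host; 8089 is the default management port
    targetUri = deploy-server.example.com:8089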
You could record which events have triggered an alert, and when it was triggered, in a summary index or KV store/CSV, and remove these from the subsequent set of results if they are within 24 hours.
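A rough sketch of the CSV variant, with hypothetical names (alerted_events.csv, an event_id field). The alert search appends what it fired on to the lookup:

    ... your alert search ...
    | eval alert_time = now()
    | table event_id alert_time
    | outputlookup append=true alerted_events.csv

Subsequent runs then drop anything that was alerted on within the last 24 hours:

    ... your alert search ...
    | lookup alerted_events.csv event_id OUTPUT alert_time
    | where isnull(alert_time) OR now() - alert_time > 86400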
Thanks for the hints. In terms of data retention, all the sections will have a similar policy. However, access grants can be an issue. In my use case, the dashboards will be monitored by section personnel and also by the SOC. Therefore, in terms of access, the SOC will be able to see DMZ, ZoneA and ZoneB, while the respective members of each section should only be able to see their own zones (need-to-know policy).

At the moment I am using different indexes so I can perform some transforms specific to each zone, as the syslog sending formats are different due to the different log aggregators used by each zone. By using the different indexes on the heavy forwarder, I am able to perform some SED for particular log sources, as well as host & source overrides on the HF.

I remember that I can limit access based on indexes, but I guess this is not possible with data models - will this be a concern? If I put them all in one data model, is it still possible to restrict access? For example, if the user can only manipulate views from the dashboard and not run searches themselves, that will still be OK.

Pros and cons in my mind:

Separate data models:
- Pros: I can easily segregate the tstats query.
- Cons: Might be difficult to get overview stats; need to use appends and maintain each additional new zone. Each new data model will need to run periodically and increase the number of scheduled accelerations?

Integrated data model:
- Cons: Might be harder to filter, e.g. between ZoneA, ZoneB and DMZ. It seems I can filter only on the few parameters in the model, e.g. source, host.
- Pros: Easier to maintain, as I just need to add new indexes into the data model whitelist. Limits the number of scheduled runs.
- And, as mentioned, the point on data access: will it still be possible to restrict?

I am still quite new to Splunk, so some of my thoughts might be wrong. Open to any advice; still in a conundrum.
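On the index-based access point, a minimal sketch of how role-based index restriction is usually expressed, with hypothetical role and index names (whether these restrictions carry over to accelerated data model summaries is discussed earlier in the thread):

    # authorize.conf (hypothetical roles and index names)
    [role_soc_analyst]
    srchIndexesAllowed = zone_dmz;zone_a;zone_b

    [role_zonea_user]
    srchIndexesAllowed = zone_a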