All Posts

I use a proxy to work around the port issue. I now get the same thing as with the curl command: the web UI shows nothing, and when I inspect it I see "browser-not-supported"? I tried multiple browsers (Chrome, Edge, Firefox).
I created roles using the SAML config and then assign a role to each user when they are created. I looked at all users and they don't have a default time zone set on their accounts. Yes, they can set it through their preferences, and I can also change it manually in Settings for each user, but it's tedious to do one by one. I want to configure it so that all existing and future users get the PT time zone automatically.
What do you mean by "change tz for all users"? If they are sitting in the PT time zone and they are using "use system TZ settings", then it is already PT as long as their workstations/laptops are correctly configured. If you want to change that also for people who are not in the PT zone, then I ask why? Internally Splunk stores all times as UTC, and then displays them in the user's time zone or whatever they have set in their account preferences.
This confirms that there is some filtering on the network side or even on this Splunk server. You could check whether e.g. iptables is running with the "iptables -vL" command (if I recall right). But as @PickleRick said, it is more probable that there is a network-level FW between your workstation and the Splunk server. In this case your options are: ask your network admins for help and/or try SSH tunneling from your local node to the Splunk server. But check first that this is allowed in your organization!
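A minimal sketch of such a tunnel, assuming SSH access to the Splunk server is permitted (the user and host names here are placeholders):
ssh -L 8000:localhost:8000 youruser@your-splunk-server
# forwards local port 8000 to the Splunk server's web port over SSH
After that you can point your browser at http://localhost:8000 on your own workstation.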
The Splunk Enterprise 10.2.0 beta is now available on voc.splunk.com. If you want to participate in this beta program you need to go to that site and apply. The beta license seems to be valid until November, so there are probably still a lot of things which are not ready yet? I just installed it on an M3 Max and at least it starts.
Hello, so I've created a props.conf on the indexer under the Windows_TA local folder and put this in:
[WinEventLog]
MAX_DAYS_AGO = 7
[XmlWinEventLog]
MAX_DAYS_AGO = 7
Then I onboarded another Windows Server - it still ingested Windows event logs going back a few years. Any ideas why that's not working?
Kind Regards
Andre
Hi, I would like to ask how to change all users' time zone to Pacific time. I did some research and saw people recommend configuring this file - SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf. But from what I know, or at least from how my Splunk was set up, it's SaaS: they provided me a link to my domain and I started using it. I'm not quite sure where exactly the mentioned file is. Thanks, Brian
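For reference, a hedged sketch of what that setting would look like on an instance where you can edit config files directly (the stanza names come from user-prefs.conf; the timezone string is an assumption):
# SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
[general_default]
tz = US/Pacific
On Splunk Cloud you don't get file-system access, so a change like this would presumably have to go through Splunk Support rather than editing the file yourself.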
I am able to curl it from the Splunk machine, but I am not able to do that from my endpoint. It seems port 8000 is not allowed from my endpoint in my organization.
Thanks for the help. I will play with the data model when I have time. I noticed parts weren't giving any records in the preview unless I manually added the index. I tried adding the index to various steps and enabling acceleration but it still didn't work.
@heathramos yeah, this is going to be a fun one. You've got data model issues, which is way more involved than just fixing a macro. Data models are complex hierarchical things with parent/child datasets that need to be built and accelerated properly - it's a whole thing.
Looking at that search, it's trying to pull from datamodel=pan_firewall with specific node relationships. If that's not set up right (or at all), nothing's going to work. And troubleshooting data models means digging into dataset structures, field mappings, acceleration status - it's honestly not a quick fix.
If you need this dashboard working soon and it's important to the business, you might want to work with Splunk OnDemand Services; they can sort out your data models properly instead of you spending days figuring out why the acceleration isn't working or why the field extractions are wrong.
If you want to try it yourself, spend some time in Settings > Data Models, checking what's actually there vs. what the dashboard expects. You'll probably end up either rebuilding the data models from scratch or rewriting all these tstats searches to use regular SPL. It's really an audit of your entire Palo Alto data ingestion and modeling setup.
If this helps, please upvote.
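One quick sanity check (a sketch, assuming the pan_firewall data model from the Palo Alto add-on is what the dashboard expects): run the dashboard's base search with summariesonly=false so it doesn't depend on acceleration at all, e.g.
| tstats summariesonly=false count FROM datamodel=pan_firewall WHERE nodename="log.url"
If that still returns nothing, the problem is the data model's constraints or index mapping rather than acceleration.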
I see searches like the following:
| tstats summariesonly=t values(log.flags) AS log.flags, count FROM datamodel=pan_firewall WHERE nodename="log.url" """" log.action="*" GROUPBY _time log.dest_name log.app:category log.app log.action log.content_type log.vendor_action
| rename log.* AS *
| stats sum(count) AS count values(app) AS app values(category) AS category BY dest_name
| table dest_name app category count
| sort -count
Hello Dinesh, please check the docs below, and if you still have questions please create a case with AppDynamics Support (https://mycase.cloudapps.cisco.com/):
https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/ne...
https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/de...
@heathramos Since the p_index macro doesn't exist, here's how to dig into the dashboard and fix it.
Edit the dashboard directly:
1. Go to your dashboard
2. Click the Edit button (top right)
3. Click on each panel/visualization that's not showing data
4. Click Edit Search (the magnifying glass) for each one
5. You'll see searches that start with `p_index` sourcetype="pan:xdr_incident"
Replace the macro with your actual index: change `p_index` to index=pan (or whatever your actual Palo Alto index is called), so the search becomes: index=pan sourcetype="pan:xdr_incident"
Do this for every search in the dashboard - there are probably 8-10 different searches based on your dashboard config.
While you're in there, also check whether sourcetype="pan:xdr_incident" matches your actual data. Run index=pan | stats count by sourcetype first to confirm. If your sourcetype is different (like pan:incident or cortex:xdr), update those too.
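An alternative sketch, if you'd rather not touch every panel: define the missing macro yourself so the existing `p_index` references resolve (the index name pan is an assumption):
# macros.conf in the app's local directory
[p_index]
definition = index=pan
Either approach works; editing the panels is more explicit, while defining the macro keeps the dashboard closer to how the app shipped it.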
I see a pan_index macro among others
Hi @heathramos, okay, so now that this is installed you should be able to see the macros in it - are you able to see any of the previously mentioned macros when in the app?
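One quick way to check from the search bar (a sketch using the standard REST endpoint for search macros; the title filter is just an example):
| rest /servicesNS/-/-/admin/macros | search title="pan*" OR title="p_index*" | table title eai:acl.app definition
That lists which macros actually exist and which app they belong to.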
@livehybrid I knew that you probably knew that, but it's worth posting so that the knowledge is spread. Yes, the rule about avoiding indexed fields is sound as a general rule of thumb, but of course in some well-thought-out cases indexed fields can bring tremendous gains in search speed - especially if you're not searching for particular values but just want aggregations, which you can achieve with tstats. That's one of the valid reasons for using indexed fields sometimes. And yep, due to how they are internally represented, there is no way to distinguish between an indexed field and a simple indexed term with :: in the middle. I haven't worked with AWS logs but I did notice that in other cases.
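For illustration, a sketch of the kind of aggregation that benefits (the index and field names here are hypothetical):
| tstats count WHERE index=fw BY action
Because action is an indexed field, this reads only the tsidx files and never touches the raw events, which is where the speed-up comes from.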
@PickleRick I absolutely agree, although maybe I kept my justification too short! Having high cardinality in indexed fields could be a big problem and would not be advised (this annoys me about INDEXED_EXTRACTIONS=json) - but indexed fields used well can drastically improve performance if users update searches to use tstats.
Side note - have you ever noticed that AWS logs often create high-cardinality indexed fields for things like S3 objects because of the double :: in the string? e.g. arn:aws:s3:::my-example-bucket/images/photo.jpg becomes arn:aws:s3 with the value ":my-example-bucket/images/photo.jpg" and can end up being pretty high cardinality!
@nordinethales Has something been raised to your attention that leads you to think that you have large tsidx files which are causing performance issues? Are you defining indexed fields at ingest time for some of your data (e.g. INDEXED_EXTRACTIONS=json or other approaches such as INGEST_EVAL - a minimal sketch of that is below)? If you're planning to reduce the number of indexed fields, then it's worth ensuring no searches are currently using them.
Did this answer help you? If so, please consider: adding karma to show it was useful; marking it as the solution if it resolved your issue; commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
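For context, a minimal sketch of the ingest-time approach mentioned above (the sourcetype, field name and condition are all hypothetical, and this deliberately produces a low-cardinality field):
# props.conf
[my:sourcetype]
TRANSFORMS-status = add_status_indexed
# transforms.conf
[add_status_indexed]
INGEST_EVAL = status_idx=if(match(_raw, "FAILED"), "failed", "ok")
Searches could then aggregate on it directly, e.g. | tstats count WHERE index=myindex BY status_idx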
Splunk Support can get access to the file system if they don't already have it, but they no doubt will ask "Why do you *need* to know?". I, too, wonder why you *need* to know. Splunk Cloud is a service, and how that service is provided shouldn't matter as long as you get what you pay for. What problem are you trying to solve?
Are you curling from the Splunk machine or from your endpoint? "The site is taking too long to respond" happens in two cases:
1) You're not using a proxy server and the server you're trying to reach doesn't respond to your connection requests - most probably there is something filtering the network traffic between your desktop/laptop and the server.
2) You are using a proxy server and something is filtering the traffic between the proxy server and the destination server (or there is no routing between those servers).
Filtering is actually quite probable since you're trying to reach the default 8000 port, which is not a standard http(s) port, so it might not be allowed in your organization. Most probably you'll need to debug it with your network admin. You can try to verify with tcpdump/wireshark on the Splunk server's side whether the initial SYN packet even reaches the server.
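A sketch of that check on the Splunk server (the interface name eth0 is an assumption; adjust to your host):
tcpdump -ni eth0 'tcp port 8000 and tcp[tcpflags] & tcp-syn != 0'
If SYN packets from your workstation's IP never show up while you retry the connection, the traffic is being dropped somewhere before it reaches the server.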
Actually, indexing too many fields can have a negative impact on performance since you're bloating your tsidx files heavily. Remember that for indexed fields you're creating a separate lexicon entry for each key-value pair. It might not be that heavy for fields where the number of possible values is low (like true/false, accept/reject/drop and so on), but for indexed extractions from JSON data, where field names can be long and values can be unique long strings, you're growing your lexicon greatly. So even when efficient search methods are used (I don't know exactly what Splunk uses internally, but I'd expect something akin to b-trees or radix trees), the amount of data you have to dig through increases due to the sheer size of the set you're searching.
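If you want to see what your indexed fields actually contribute to the lexicon, one hedged option (walklex exists in recent Splunk versions; the index name is hypothetical):
| walklex index=my_index type=fieldvalue
That lists the indexed key::value pairs stored in the buckets' tsidx lexicons, so eyeballing the output quickly shows which fields carry huge numbers of distinct values.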