All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a simple search with a sort command at the end, as follows:

.... some base search | dedup id | table id, name | sort -id

and I'm being presented with the following results:

18760000000000166 2020 Summer
18760000000000168 2020 Fall
18760000000000167 2020-2021 Academic Year
18760000000000164 2020 Winter
18760000000000165 2020 Spring
18760000000000163 2019 Fall
18760000000000131 2019-2020 Academic Year
18760000000000127 2019 Spring
18760000000000129 2019-2020 Academic Year
18760000000000130 2018-2019 Academic Year

I expected the results to be ordered by id (descending). What am I missing?
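A guess at what may be happening, in case it helps: by default, sort compares values it deems non-numeric lexicographically, and these 17-digit IDs also exceed the 53-bit integer precision of the double type Splunk uses for numbers, so even a forced numeric sort can leave near-identical IDs in arbitrary order. Since the IDs here are all the same width, a string comparison yields numeric order. A sketch:

```
.... some base search
| dedup id
| table id, name
| sort 0 -str(id)
```

str(id) forces a lexicographic comparison, and sort 0 lifts the default 10,000-result limit on sort.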
Hi All, Is there a way to configure a Splunk dashboard to automatically deselect specific checkboxes if one is already selected? For example, let's say I have 4 checkboxes in my dashboard (checkbox1, checkbox2, checkbox3, checkbox4). I want to create a rule like this: "When checkbox1 is selected, always deselect checkbox2 and checkbox4."
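An untested sketch, assuming all four checkboxes are choices in a single checkbox input: a <change> handler can rewrite the form.* token whenever checkbox1 appears among the selected values. The snippet below simply collapses the selection to checkbox1 alone (preserving checkbox3's independent state generally needs custom JavaScript); the token and choice names are placeholders:

```
<input type="checkbox" token="checks" searchWhenChanged="true">
  <label>Options</label>
  <choice value="checkbox1">checkbox1</choice>
  <choice value="checkbox2">checkbox2</choice>
  <choice value="checkbox3">checkbox3</choice>
  <choice value="checkbox4">checkbox4</choice>
  <change>
    <!-- when checkbox1 is among the selected values, drop the others -->
    <condition match="match($checks$, &quot;checkbox1&quot;)">
      <set token="form.checks">checkbox1</set>
    </condition>
  </change>
</input>
```

Setting form.checks changes what the input itself shows as selected, which is what resets the other checkboxes.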
We have a single-site indexer cluster with 2 indexers and one cluster master. We are seeing issues with one of the indexers, whose buckets are not getting replicated after a reboot of the search head.

Splunk settings and conditions:
- Splunk version: 6.3.1
- SF/RF are not met
- Clustering: single-site
- Each indexer and the master has 12 cores and sufficient memory (1 TB)
- ulimit = 102400
- THP is disabled

/opt/splunk/bin/splunk show cluster-status output:

Replication factor not met
Search factor not met
All data is searchable
Indexing Ready = YES
reporting2.com-2-slave 05035B22-ECA4-4514-96AC-BE3BDF626D84 default Searchable YES Status Up Bucket Count=713
reporting1.com-1-slave E7B1F3CE-FE08-454D-B41D-ED0346DE3671 default Searchable YES Status Up Bucket Count=686

Telnet to both indexers shows: Connected to 10.XXX.XX

Things I tried:
- Rebooting both indexers together.
- Tuning some parameters in server.conf: heartbeat_timeout = 600 on the CM and heartbeat_period = 10 on the peers.

Haven't done any reporting data restore; the data was freshly indexed.
Cluster master logs:

id=ib_threatdb_a~33~05035B22-ECA4-4514-96AC-BE3BDF626D84 tgtGuid=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 tgtHP=169.XXX.0.4:7089 tgtRP=7887 useSSL=false
04-29-2020 11:02:18.094 +0000 INFO CMMaster - replication error src=05035B22-ECA4-4514-96AC-BE3BDF626D84 tgt=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 failing=tgt bid=ib_threatdb_a~33~05035B22-ECA4-4514-96AC-BE3BDF626D84
04-29-2020 11:02:18.094 +0000 INFO CMReplicationRegistry - Finished replication: bid=ib_threatdb_a~33~05035B22-ECA4-4514-96AC-BE3BDF626D84 src=05035B22-ECA4-4514-96AC-BE3BDF626D84 target=E7B1F3CE-FE08-454D-B41D-ED0346DE3671
04-29-2020 11:02:18.094 +0000 INFO CMPeer - peer=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 peer_name=reporting1.com-1-slave transitioning from=Up to=Pending reason="non-streaming failure"
04-29-2020 11:02:18.094 +0000 INFO CMMaster - event=handleReplicationError bid=ib_threatdb_a~33~05035B22-ECA4-4514-96AC-BE3BDF626D84 tgt=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 peer_name=reporting1.com-1-slave msg='target doesn't have bucket now. ignoring'

After some time, these logs start flooding:

04-28-2020 15:43:08.302 +0000 INFO CMPeer - peer=05035B22-ECA4-4514-96AC-BE3BDF626D84 peer_name=reporting2.com-2-slave transitioning from=Pending to=Up reason="heartbeat received."
Reporting 1 indexer logs:

04-29-2020 14:03:17.588 +0000 ERROR RawdataHashMarkReader - Error parsing rawdata inside bucket path="/opt/splunk/var/lib/splunk/ib_threatdb_a/db/rb_1588024289_1588024289_1848_E7B1F3CE-FE08-454D-B41D-ED0346DE3671": msg="Bad opcode: 2B"
04-29-2020 14:03:17.588 +0000 INFO BucketReplicator - Created asyncReplication task to replicate bucket ib_threatdb_a~1848~E7B1F3CE-FE08-454D-B41D-ED0346DE3671 to guid=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 host=169.254.0.4 s2sport=7887 bid=ib_threatdb_a~1848~E7B1F3CE-FE08-454D-B41D-ED0346DE3671
04-29-2020 14:03:17.588 +0000 INFO BucketReplicator - event=startBucketReplication bid=ib_threatdb_a~1848~E7B1F3CE-FE08-454D-B41D-ED0346DE3671
04-29-2020 14:03:17.588 +0000 WARN BucketReplicator - Failed to replicate warm bucket bid=ib_threatdb_a~1850~E7B1F3CE-FE08-454D-B41D-ED0346DE3671 to guid=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 host=169.254.0.4 s2sport=7887. Connection closed.
04-29-2020 14:03:17.588 +0000 INFO CMReplicationRegistry - Finished replication: bid=ib_threatdb_a~1850~E7B1F3CE-FE08-454D-B41D-ED0346DE3671 src=05035B22-ECA4-4514-96AC-BE3BDF626D84 target=E7B1F3CE-FE08-454D-B41D-ED0346DE3671
04-29-2020 14:03:17.588 +0000 INFO CMSlave - bid=ib_threatdb_a~1850~E7B1F3CE-FE08-454D-B41D-ED0346DE3671 src=05035B22-ECA4-4514-96AC-BE3BDF626D84 tgt=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 failing=E7B1F3CE-FE08-454D-B41D-ED0346DE3671 queued replication error job

Reporting 1 search head logs (seen continuously):

04-29-2020 11:13:09.778 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.XXX.23:9997 timed out
04-29-2020 11:13:39.941 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.XXX.23:9997 timed out
04-29-2020 11:14:39.859 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.XXX.23:9997 timed out
Reporting 2 indexer logs:

04-29-2020 11:29:36.254 +0000 ERROR TcpInputProc - event=replicationData status=failed err="Could not open file for bid=ib_threatdb_a~29~05035B22-ECA4-4514-96AC-BE3BDF626D84 err="Cannot find config for idx=ib_threatdb_a" (No such file or directory)"
04-29-2020 11:29:36.257 +0000 ERROR TcpInputProc - event=replicationData status=failed err="Could not open file for bid=ib_threatdb_a~30~05035B22-ECA4-4514-96AC-BE3BDF626D84 err="Cannot find config for idx=ib_threatdb_a" (No such file or directory)"
04-29-2020 11:29:38.089 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node
04-29-2020 11:29:40.612 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node
04-29-2020 11:29:44.279 +0000 ERROR TcpInputProc - event=replicationData status=failed err="Could not open file for bid=ib_threatdb_a~31~05035B22-ECA4-4514-96AC-BE3BDF626D84 err="Cannot find config for idx=ib_threatdb_a" (Success)"
04-29-2020 11:29:44.938 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node
04-29-2020 11:29:45.488 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node
04-29-2020 11:29:51.481 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node
04-29-2020 11:29:52.300 +0000 ERROR TcpInputProc - event=replicationData status=failed err="Could not open file for bid=ib_threatdb_a~32~05035B22-ECA4-4514-96AC-BE3BDF626D84 err="Cannot find config for idx=ib_threatdb_a" (Success)"
04-29-2020 11:29:53.124 +0000 INFO ClusterMasterPeerHandler - master is not enabled on this node

Reporting 2 search head logs:

04-29-2020 10:53:19.465 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:53:49.467 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:54:19.245 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:54:49.245 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:56:49.243 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:57:19.244 +0000 WARN TcpOutputProc - Cooked connection to ip=10.196.107.28:9997 timed out
04-29-2020 10:57:29.528 +0000 FATAL ProcessRunner - Unexpected EOF from process runner child!
04-29-2020 10:57:29.528 +0000 ERROR ProcessRunner - helper process seems to have died (child killed by signal 15: Terminated)!
Hi, I am trying to implement customized chart views. I have a static multiselect input with token "host" that includes ALL as a value. The query uses a timechart:

| timechart span=15m avg(a1) as "session_current" by host

When ALL is selected in the multiselect input, I see every host in the chart legend. Instead of showing all of them, I want the chart legend to show only "WEST".
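One hedged approach, assuming WEST is one of the host values: timechart ... by host names its output columns after the host values, so you can keep just the series you want with a trailing fields command (the base search below is a placeholder):

```
index=... $host$
| timechart span=15m avg(a1) as "session_current" by host
| fields _time, WEST
```

This leaves the by-host split intact in the query while only the WEST column survives to the chart.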
Hello all, I am trying to remove the time portion of the string value of a field in our indexed data. The expiration field contains a string value, shown below, with both a date and a time.

Query:

sourcetype="db" unique_id="00-201" expiration="*" | eval mytime=strftime(strptime(expiration, "%m/%d/%Y"),"%m/%d/%Y") | table unique_id expiration | dedup unique_id

Results:

unique_id expiration
00-201 2022-08-12 00:00:00.0

Goal: perform a query that displays the expiration field value as 06-04-2022 and sorts from oldest to newest.
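A sketch, assuming the stored value really looks like 2022-08-12 00:00:00.0: parse with a format that matches the stored layout (stripping the fractional seconds first), keep the epoch value for sorting, then format for display:

```
sourcetype="db" unique_id="00-201" expiration="*"
| eval exp_epoch=strptime(replace(expiration, "\.\d+$", ""), "%Y-%m-%d %H:%M:%S")
| dedup unique_id
| eval expiration=strftime(exp_epoch, "%m-%d-%Y")
| sort 0 exp_epoch
| table unique_id expiration
```

The original strptime format "%m/%d/%Y" does not match the stored "%Y-%m-%d ..." layout, so it returns null, which is why the formatting never takes effect.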
Hello, dear Splunkers. We want to deploy Splunk in our company, and one of our important concerns is high availability. Could you please suggest an architecture that covers HA for all Splunk components? My main concern is UDP syslog from network devices (we don't have a network load balancer device). In our initial plan we are going to use indexer clustering and autoLB configuration on the UFs, but we don't know how to handle HA for UDP syslog inputs, the License Manager, the Deployment Server, and the other components. Thank you.
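For the UDP syslog piece specifically, one common pattern (sketched below; hostnames, ports, and paths are placeholder assumptions) is to run a dedicated syslog daemon such as syslog-ng on two collector hosts, point devices at a DNS name or keepalived VIP that fails over between them, write events to files, and let a UF on each collector monitor those files and autoLB to the indexers:

```
# syslog-ng.conf on each collector: receive UDP 514, write per-host files
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_files { file("/var/log/remote/${HOST}/syslog.log"); };
log { source(s_net); destination(d_files); };

# inputs.conf on the UF: monitor the files; 4th path segment is the sending host
[monitor:///var/log/remote]
sourcetype = syslog
host_segment = 4
```

This avoids sending UDP straight at an indexer, so an indexer restart no longer drops syslog traffic.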
Hi everyone! We've moved some of our heavy lookups to the KV store, and now they work faster and more reliably. But one of them can barely compute anything when used in a search with the lookup command, even over a 24-hour span, while it loads easily with inputlookup. The accelerated field is a 16-character hexadecimal UID. To make it easier for Splunk to accelerate, we converted it to a bigint and accelerated on the new field; still the same slowness. The lookup was rewritten, but that didn't help. Its CSV version performs lookups much faster, but we had issues updating large lookups, so we moved to KV. The slow KV collection has 10 million entries and around 5 fields. A fast KV collection with 40 million entries and 10 fields works fine (with its accelerated field also converted from hex to bigint).

Server: 251.81 GB physical memory, 20 CPU cores
In limits.conf: max_threads_per_outputlookup = 10
In server.conf: oplogSize = 2000

What can cause such slowness? What parameters can we tune?
Domain controllers have a forwarder with TA-Windows deployed via a server class, and the Splunk App for Windows Infrastructure is on the search head. We get WinEventLog data, some AD user record changes, logins, etc. However, things like domain controller health are not populating, and neither are the OU or domain drop-downs in some dashboards. During the detect-features configuration, Domains, Domain Controllers, and DNS are not detected.
Hi All,

Summary: I have Windows logs for remote VPN access, and I want to graph concurrent use by user. The problem, by example: EventCode=123 is a remote connection that occurs at 2pm, and EventCode=321 is the disconnection that occurs at 5pm. There are no logs between the two events, so a timechart comes back with a 1 at 2pm and a 1 at 5pm but 0 for the hours in between. I want a count for the hours in between to show that the session was active. Is there a way to do this?
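One way to fill in the in-between hours is to pair each connect with its disconnect and then use concurrency, which counts overlapping sessions. A sketch, assuming there is a user field and that 123/321 really are the connect/disconnect codes:

```
sourcetype=wineventlog (EventCode=123 OR EventCode=321)
| transaction user startswith="EventCode=123" endswith="EventCode=321"
| concurrency duration=duration
| timechart span=1h max(concurrency) as active_sessions
```

transaction emits a duration field per session; concurrency then computes how many sessions overlap at each session start, so the hours between connect and disconnect are counted as active.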
Hello, I am trying to collect key metrics about search volumes hitting the various tiers of Splunk storage buckets. For example, I would love a report that says:

xx% of searches are accessing warm buckets only
xx% of searches are accessing both warm and cold buckets

So far the only option I've found is watching disk I/O volume on the cold mounts to get an idea of how busy they are, and I am hoping for better information from Splunk itself.
Why does the following work:

url=*string1* OR url=*mystring2*

But this one does not?

url in (*mystring1*, *mystring2*)
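A couple of likely explanations, hedged: the search command's IN operator is written in uppercase and, in reasonably recent Splunk versions, accepts wildcards when the values are quoted, e.g.:

```
url IN ("*mystring1*", "*mystring2*")
```

By contrast, the eval/where in() function accepts no wildcards at all; inside where, the usual substitute is like(url, "%mystring1%") OR like(url, "%mystring2%").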
I was trying to filter an event ID in a subsearch and then use it in the main search to find other events with the related ID, comparing the time from the subsearch with the last event time from the main search.

The initial line where the ID appears is:

2020-04-29 16:14:08,637 backend_7.2.15: INFO services/ConnectionManagerService(backend): \ncreations: 1262172\nupdates: \ncancellations: 1261482-1

One of the problems is that the event IDs above can also appear in comma-separated lists, like below:

2020-04-29 16:14:08,791 backend_7.2.15: INFO services/ConnectionManagerService(backend): \ncreations: 1262174,1262175,1262176\nupdates: \ncancellations: 1261438-1,1261436-1,1261440-1

Confirmation line - last:

10.21.160.144.SwitchingCore/openflowConfig! (Config success!). New contributors: Set(book.1262175-1, book.1262174-1, book.1262176-1), removed contributors: Set(book.1261438-1, book.1261440-1, book.1261436-1).

My query (the rex named group appears to have been stripped by the forum; it captures the ID):

....... sourcetype=main ConfigurationManagerService
| append [search ................ sourcetype=main "ConnectionManagerService(backend)" "\ncreations:"
    | multikv noheader=t
    | rex "(?:ions: )(?<ID>\d{7})"
    | where ID != 0
    | rename _time as start_time
    | table ID start_time]
| stats earliest(start_time), latest(_time) as stop by ID

How can I make it more efficient, or just working?

Part of the log:

2020-04-29 16:19:13,082 backend_7.2.15: INFO services/ConnectionManagerService(backend): \ncreations: 1262180\nupdates: \ncancellations: 1258780-1
2020-04-29 16:14:10,479 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1.......SwitchingCore/rpfPortConfig! (Config success!). New contributors: Set(book.1262174-1, book.1262176-1), removed contributors: Set().
2020-04-29 16:14:09,498 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1....70000/igmpPortConfig! (Config success!). New contributors: Set(book.1262174-1, book.1262176-1), removed contributors: Set().
2020-04-29 16:14:09,442 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1.....10002/igmpPortConfig! (Config success!). New contributors: Set(book.1262176-1), removed contributors: Set().
2020-04-29 16:14:09,438 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1......70000/igmpPortConfig! (Config success!). New contributors: Set(book.1262175-1), removed contributors: Set().
2020-04-29 16:14:09,388 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1.......SwitchingCore/openflowConfig! (Config success!). New contributors: Set(book.1262175-1, book.1262174-1, book.1262176-1), removed contributors: Set(book.1261438-1, book.1261440-1, book.1261436-1).
2020-04-29 16:14:09,314 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1.........70000/igmpPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262174-1, book.1262176-1), removed contributors: Set()
2020-04-29 16:14:09,313 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1......70000/igmpPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262175-1), removed contributors: Set()
2020-04-29 16:14:09,313 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1......SwitchingCore/rpfPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262176-1), removed contributors: Set()
2020-04-29 16:14:09,308 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1..........SwitchingCore/rpfPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262174-1, book.1262176-1), removed contributors: Set()
2020-04-29 16:14:09,306 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1.........SwitchingCore/openflowConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262175-1, book.1262174-1, book.1262176-1), removed contributors: Set(book.1261438-1, book.1261440-1, book.1261436-1)
2020-04-29 16:14:09,305 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1........SwitchingCore/rpfPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262175-1), removed contributors: Set()
2020-04-29 16:14:09,303 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1.......10002/igmpPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262176-1), removed contributors: Set()
2020-04-29 16:14:09,302 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1........SwitchingCore/openflowConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262175-1, book.1262174-1), removed contributors: Set()
2020-04-29 16:14:09,300 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1........SwitchingCore/openflowConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262176-1), removed contributors: Set()
2020-04-29 16:14:08,914 backend_7.2.15: INFO services/ConfigurationManagerService(backend): Successfully applied config for 1........SwitchingCore/openflowConfig! (Config success!). New contributors: Set(book.1262172-1), removed contributors: Set(book.1261482-1).
2020-04-29 16:14:08,837 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1.......SwitchingCore/openflowConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262172-1), removed contributors: Set(book.1261482-1)
2020-04-29 16:14:08,836 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1........SwitchingCore/openflowConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262172-1), removed contributors: Set(book.1261482-1)
2020-04-29 16:14:08,835 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1.......70000/igmpPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262172-1), removed contributors: Set(book.1261482-1)
2020-04-29 16:14:08,835 backend_7.2.15: INFO services/ConfigurationManagerService(backend): ControlledVertexFSM@1........SwitchingCore/rpfPortConfig: New config retrieved by Root state with delay None, new contributors: Set(book.1262172-1), removed contributors: Set(book.1261482-1)
2020-04-29 16:14:08,791 backend_7.2.15: INFO services/ConnectionManagerService(backend): \ncreations: 1262174,1262175,1262176\nupdates: \ncancellations: 1261438-1,1261436-1,1261440-1
2020-04-29 16:14:08,637 backend_7.2.15: INFO services/ConnectionManagerService(backend): \ncreations: 1262172\nupdates: \ncancellations: 1261482-1
While field values are not case-sensitive by default in Splunk, the default for lookup field-value matching is case-sensitive. I can't think of any valid use case for that inconsistency; is there a reason I could be missing?

Note: I am aware that you can override the case-sensitivity setting when defining a lookup. I am merely wondering why the default for lookup field values doesn't align with the overall Splunk behaviour of field values being case-insensitive.
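For reference, the default can be flipped per lookup in transforms.conf (the stanza and file names below are placeholders):

```
[my_lookup]
filename = my_lookup.csv
case_sensitive_match = false
```

With this set, the lookup matches input values regardless of case, in line with the rest of Splunk's field-value behaviour.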
I have a set of data like the below:

total=2000 date=2020-04-29
total=1975 date=2020-04-28
total=1951 date=2020-04-27

What I want to produce is a chart that shows the difference per day of these totals, i.e. as per the below:

total difference=25 date=2020-04-29
total difference=24 date=2020-04-28
total difference=33 date=2020-04-27

etc. I need a calculation of the difference per day. My raw data already has the total and date in it, so it's a straight calculation from that data.
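A sketch, assuming one event per day carrying total and date fields: sort ascending so each row follows the previous day, then delta subtracts the previous total from the current one:

```
... base search
| table date total
| sort 0 date
| delta total as difference
| sort 0 -date
```

The first (oldest) day has no predecessor in the result set, so its difference comes out null; extend the time range by one day if you need a value for it.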
Good afternoon, I have text in a lookup.csv that has hard returns in it, for example:

This is the reason why the sun is hot:
reason 1.
reason 2.
reason 3.

But the result from the lookup turns the hard returns into spaces, producing a single long sentence, for example:

This is the reason why the sun is hot: reason 1. reason 2. reason 3.

Is there a way to keep the hard returns (special characters in the CSV), or to put them back in after the lookup? Many thanks for your time.
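Statistics tables flatten embedded newlines, but they do render multivalue fields with one value per line. One workaround, assuming you can edit the CSV: store an explicit separator (e.g. ||) in place of the hard returns, then split it at search time. The field and lookup names below are placeholders:

```
| lookup my_lookup key OUTPUT reason
| eval reason=split(reason, "||")
```

Each segment of the split becomes its own line in the table cell.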
I have the below query, and it should give results for a time filter of the last four hours (or the last 24 hours):

| makeresults | bucket _time span=1h | stats count by _time

But it gives only the latest hour instead of 4 rows for the last-four-hours filter (or 24 rows for the last-24-hours filter). Kindly help us.
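As far as I know, makeresults always emits a single event stamped with the current time and ignores the time picker, so there is only ever one row to bucket. If the goal is one row per hour, generate the rows explicitly, e.g.:

```
| makeresults count=24
| streamstats count as n
| eval _time=now() - (n-1)*3600
| bucket _time span=1h
| stats count by _time
```

Change count=24 to 4 (or drive it from a token) to match the desired window.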
Hello, the Splunk App for CEF is installed on a Splunk HF, and I did all the field mapping required for CyberArk PTA to detect the logs, but I'm not sure why it isn't detecting them. Before Splunk we used ArcSight: the logs came in CEF format and CyberArk PTA detected them. Now, with the Splunk App for CEF, the logs are coming in CEF format similar to the ArcSight CEF logs, but CyberArk PTA is still not detecting them, and I don't know why. I have raised this issue with CyberArk; even they don't know. Can anyone help here please?

Regards, Arjun
Hello, in my dashboard, depending on the search result, I'd like to set/unset tokens. These tokens are then used in dependent panels to decide whether they are displayed (see the code in the attachment). My code doesn't work and I don't know why. Could you help me please?

Regards, Magali
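Without seeing the attachment I can only guess, but the usual Simple XML shape is a <done> (or <progress>) handler on the search, with conditions reading $result.<field>$ from the first result row, and the panel depending on the token. A minimal sketch with placeholder names:

```
<search>
  <query>index=_internal | stats count</query>
  <done>
    <condition match="$result.count$ &gt; 0">
      <set token="show_panel">true</set>
    </condition>
    <condition>
      <unset token="show_panel"/>
    </condition>
  </done>
</search>

<panel depends="$show_panel$">
  ...
</panel>
```

A common gotcha: depends only checks whether the token is defined at all, so hide a panel by unsetting the token, not by setting it to false.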
I have deployed data collection scripts on a Citrix machine, and the UF is running as the Splunk local system user; this user does have local admin rights. However, the script is not able to collect the data and send it to the indexer. I can see the below errors in the Splunk _internal logs:

Get-BrokerSession : Insufficient administrative privilege
+ FullyQualifiedErrorId : Citrix.XDPowerShell.Broker.AccessDenied,Citrix.B

Can anyone help me resolve the issue?
I have a set of users x, y, z and a few URL regexes a, b, c. I need to know how many times these users hit the URL regexes, in chart format, with users on the y-axis and URLs on the x-axis. Any help?
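A hedged sketch, assuming fields named user and url and placeholder regexes: classify each event into a URL group with match(), then chart with the URL group as the x-axis (over) and users as the split (by):

```
index=web user IN (x, y, z)
| eval url_group=case(match(url, "regex_a"), "a",
                      match(url, "regex_b"), "b",
                      match(url, "regex_c"), "c")
| where isnotnull(url_group)
| chart count over url_group by user
```

Swap over and by if you end up wanting users along the x-axis instead.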